00:00:00.002 Started by upstream project "autotest-nightly" build number 3917
00:00:00.002 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3292
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.119 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.120 The recommended git tool is: git
00:00:00.120 using credential 00000000-0000-0000-0000-000000000002
00:00:00.122 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.162 Fetching changes from the remote Git repository
00:00:00.163 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.198 Using shallow fetch with depth 1
00:00:00.198 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.198 > git --version # timeout=10
00:00:00.215 > git --version # 'git version 2.39.2'
00:00:00.215 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.232 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.232 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.566 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.575 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.585 Checking out Revision 456d80899d5187c68de113852b37bde1201fd33a (FETCH_HEAD)
00:00:06.585 > git config core.sparsecheckout # timeout=10
00:00:06.595 > git read-tree -mu HEAD # timeout=10
00:00:06.609 > git checkout -f 456d80899d5187c68de113852b37bde1201fd33a # timeout=5
00:00:06.632 Commit message: "jenkins/config: Drop WFP25 for maintenance"
00:00:06.632 > git rev-list --no-walk 456d80899d5187c68de113852b37bde1201fd33a # timeout=10
00:00:06.710 [Pipeline] Start of Pipeline
00:00:06.721 [Pipeline] library
00:00:06.722 Loading library shm_lib@master
00:00:06.723 Library shm_lib@master is cached. Copying from home.
00:00:06.738 [Pipeline] node
00:00:06.754 Running on WFP21 in /var/jenkins/workspace/nvmf-phy-autotest
00:00:06.755 [Pipeline] {
00:00:06.764 [Pipeline] catchError
00:00:06.765 [Pipeline] {
00:00:06.775 [Pipeline] wrap
00:00:06.781 [Pipeline] {
00:00:06.787 [Pipeline] stage
00:00:06.788 [Pipeline] { (Prologue)
00:00:07.015 [Pipeline] sh
00:00:07.296 + logger -p user.info -t JENKINS-CI
00:00:07.316 [Pipeline] echo
00:00:07.318 Node: WFP21
00:00:07.323 [Pipeline] sh
00:00:07.615 [Pipeline] setCustomBuildProperty
00:00:07.624 [Pipeline] echo
00:00:07.624 Cleanup processes
00:00:07.629 [Pipeline] sh
00:00:07.909 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:07.909 1313210 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:07.922 [Pipeline] sh
00:00:08.208 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:08.208 ++ grep -v 'sudo pgrep'
00:00:08.208 ++ awk '{print $1}'
00:00:08.208 + sudo kill -9
00:00:08.208 + true
00:00:08.222 [Pipeline] cleanWs
00:00:08.232 [WS-CLEANUP] Deleting project workspace...
00:00:08.232 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.238 [WS-CLEANUP] done
00:00:08.242 [Pipeline] setCustomBuildProperty
00:00:08.255 [Pipeline] sh
00:00:08.537 + sudo git config --global --replace-all safe.directory '*'
00:00:08.627 [Pipeline] httpRequest
00:00:08.662 [Pipeline] echo
00:00:08.663 Sorcerer 10.211.164.101 is alive
00:00:08.672 [Pipeline] httpRequest
00:00:08.676 HttpMethod: GET
00:00:08.677 URL: http://10.211.164.101/packages/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz
00:00:08.678 Sending request to url: http://10.211.164.101/packages/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz
00:00:08.694 Response Code: HTTP/1.1 200 OK
00:00:08.694 Success: Status code 200 is in the accepted range: 200,404
00:00:08.695 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz
00:00:11.714 [Pipeline] sh
00:00:11.997 + tar --no-same-owner -xf jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz
00:00:12.013 [Pipeline] httpRequest
00:00:12.044 [Pipeline] echo
00:00:12.046 Sorcerer 10.211.164.101 is alive
00:00:12.054 [Pipeline] httpRequest
00:00:12.058 HttpMethod: GET
00:00:12.059 URL: http://10.211.164.101/packages/spdk_78cbcfdde1ea721461a0377ef7e908b0636460ea.tar.gz
00:00:12.059 Sending request to url: http://10.211.164.101/packages/spdk_78cbcfdde1ea721461a0377ef7e908b0636460ea.tar.gz
00:00:12.061 Response Code: HTTP/1.1 200 OK
00:00:12.062 Success: Status code 200 is in the accepted range: 200,404
00:00:12.062 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_78cbcfdde1ea721461a0377ef7e908b0636460ea.tar.gz
00:00:35.289 [Pipeline] sh
00:00:35.572 + tar --no-same-owner -xf spdk_78cbcfdde1ea721461a0377ef7e908b0636460ea.tar.gz
00:00:38.120 [Pipeline] sh
00:00:38.403 + git -C spdk log --oneline -n5
00:00:38.403 78cbcfdde test/scheduler: fix cpu mask for rpc governor tests
00:00:38.403 ba69d4678 event/scheduler: remove custom opts from static scheduler
00:00:38.403 79fce488b test/scheduler: test scheduling period with dynamic scheduler
00:00:38.403 673f37314 ut/nvme_pcie: allocate nvme_pcie_qpair instead of spdk_nvme_qpair
00:00:38.403 084afa904 util: copy errno before calling stdlib's functions
00:00:38.416 [Pipeline] }
00:00:38.433 [Pipeline] // stage
00:00:38.440 [Pipeline] stage
00:00:38.442 [Pipeline] { (Prepare)
00:00:38.456 [Pipeline] writeFile
00:00:38.472 [Pipeline] sh
00:00:38.762 + logger -p user.info -t JENKINS-CI
00:00:38.774 [Pipeline] sh
00:00:39.057 + logger -p user.info -t JENKINS-CI
00:00:39.068 [Pipeline] sh
00:00:39.348 + cat autorun-spdk.conf
00:00:39.348 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:39.348 SPDK_TEST_NVMF=1
00:00:39.348 SPDK_TEST_NVME_CLI=1
00:00:39.348 SPDK_TEST_NVMF_NICS=mlx5
00:00:39.348 SPDK_RUN_ASAN=1
00:00:39.348 SPDK_RUN_UBSAN=1
00:00:39.348 NET_TYPE=phy
00:00:39.355 RUN_NIGHTLY=1
00:00:39.359 [Pipeline] readFile
00:00:39.383 [Pipeline] withEnv
00:00:39.385 [Pipeline] {
00:00:39.398 [Pipeline] sh
00:00:39.681 + set -ex
00:00:39.681 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]]
00:00:39.681 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:00:39.681 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:39.681 ++ SPDK_TEST_NVMF=1
00:00:39.681 ++ SPDK_TEST_NVME_CLI=1
00:00:39.681 ++ SPDK_TEST_NVMF_NICS=mlx5
00:00:39.681 ++ SPDK_RUN_ASAN=1
00:00:39.681 ++ SPDK_RUN_UBSAN=1
00:00:39.681 ++ NET_TYPE=phy
00:00:39.681 ++ RUN_NIGHTLY=1
00:00:39.681 + case $SPDK_TEST_NVMF_NICS in
00:00:39.681 + DRIVERS=mlx5_ib
00:00:39.681 + [[ -n mlx5_ib ]]
00:00:39.681 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:39.681 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:46.247 rmmod: ERROR: Module irdma is not currently loaded
00:00:46.247 rmmod: ERROR: Module i40iw is not currently loaded
00:00:46.247 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:46.247 + true
00:00:46.247 + for D in $DRIVERS
00:00:46.247 + sudo modprobe mlx5_ib
00:00:46.247 + exit 0
00:00:46.257 [Pipeline] }
00:00:46.275 [Pipeline] // withEnv
00:00:46.280 [Pipeline] }
00:00:46.297 [Pipeline] // stage
00:00:46.306 [Pipeline] catchError
00:00:46.308 [Pipeline] {
00:00:46.323 [Pipeline] timeout
00:00:46.323 Timeout set to expire in 1 hr 0 min
00:00:46.325 [Pipeline] {
00:00:46.340 [Pipeline] stage
00:00:46.342 [Pipeline] { (Tests)
00:00:46.357 [Pipeline] sh
00:00:46.641 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest
00:00:46.642 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest
00:00:46.642 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest
00:00:46.642 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]]
00:00:46.642 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:46.642 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output
00:00:46.642 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]]
00:00:46.642 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:00:46.642 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output
00:00:46.642 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:00:46.642 + [[ nvmf-phy-autotest == pkgdep-* ]]
00:00:46.642 + cd /var/jenkins/workspace/nvmf-phy-autotest
00:00:46.642 + source /etc/os-release
00:00:46.642 ++ NAME='Fedora Linux'
00:00:46.642 ++ VERSION='38 (Cloud Edition)'
00:00:46.642 ++ ID=fedora
00:00:46.642 ++ VERSION_ID=38
00:00:46.642 ++ VERSION_CODENAME=
00:00:46.642 ++ PLATFORM_ID=platform:f38
00:00:46.642 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:46.642 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:46.642 ++ LOGO=fedora-logo-icon
00:00:46.642 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:46.642 ++ HOME_URL=https://fedoraproject.org/
00:00:46.642 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:46.642 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:46.642 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:46.642 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:46.642 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:46.642 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:46.642 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:46.642 ++ SUPPORT_END=2024-05-14
00:00:46.642 ++ VARIANT='Cloud Edition'
00:00:46.642 ++ VARIANT_ID=cloud
00:00:46.642 + uname -a
00:00:46.642 Linux spdk-wfp-21 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:46.642 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:00:49.932 Hugepages
00:00:49.932 node hugesize free / total
00:00:49.932 node0 1048576kB 0 / 0
00:00:49.932 node0 2048kB 0 / 0
00:00:50.192 node1 1048576kB 0 / 0
00:00:50.192 node1 2048kB 0 / 0
00:00:50.192
00:00:50.192 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:50.192 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:50.192 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:50.192 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:50.192 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:50.192 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:50.192 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:50.192 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:50.192 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:50.192 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:50.192 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:50.192 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:50.192 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:50.192 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:50.192 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:50.192 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:50.192 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:50.192 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:50.192 + rm -f /tmp/spdk-ld-path
00:00:50.192 + source autorun-spdk.conf
00:00:50.192 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:50.192 ++ SPDK_TEST_NVMF=1
00:00:50.192 ++ SPDK_TEST_NVME_CLI=1
00:00:50.192 ++ SPDK_TEST_NVMF_NICS=mlx5
00:00:50.192 ++ SPDK_RUN_ASAN=1
00:00:50.192 ++ SPDK_RUN_UBSAN=1
00:00:50.192 ++ NET_TYPE=phy
00:00:50.192 ++ RUN_NIGHTLY=1
00:00:50.192 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:50.192 + [[ -n '' ]]
00:00:50.192 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:50.192 + for M in /var/spdk/build-*-manifest.txt
00:00:50.192 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:50.192 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:00:50.452 + for M in /var/spdk/build-*-manifest.txt
00:00:50.452 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:50.452 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:00:50.452 ++ uname
00:00:50.452 + [[ Linux == \L\i\n\u\x ]]
00:00:50.452 + sudo dmesg -T
00:00:50.452 + sudo dmesg --clear
00:00:50.452 + dmesg_pid=1314294
00:00:50.452 + [[ Fedora Linux == FreeBSD ]]
00:00:50.452 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:50.452 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:50.452 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:50.452 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:50.452 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:50.452 + [[ -x /usr/src/fio-static/fio ]]
00:00:50.452 + export FIO_BIN=/usr/src/fio-static/fio
00:00:50.452 + FIO_BIN=/usr/src/fio-static/fio
00:00:50.452 + sudo dmesg -Tw
00:00:50.452 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:50.452 + [[ !
-v VFIO_QEMU_BIN ]] 00:00:50.452 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:50.452 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:50.452 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:50.452 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:50.452 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:50.452 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:50.452 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:00:50.452 Test configuration: 00:00:50.452 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:50.452 SPDK_TEST_NVMF=1 00:00:50.452 SPDK_TEST_NVME_CLI=1 00:00:50.452 SPDK_TEST_NVMF_NICS=mlx5 00:00:50.452 SPDK_RUN_ASAN=1 00:00:50.452 SPDK_RUN_UBSAN=1 00:00:50.452 NET_TYPE=phy 00:00:50.452 RUN_NIGHTLY=1 06:50:05 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:00:50.452 06:50:05 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:50.452 06:50:05 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:50.452 06:50:05 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:50.452 06:50:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:50.452 06:50:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:50.452 06:50:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:50.452 06:50:05 -- paths/export.sh@5 -- $ export PATH 00:00:50.452 06:50:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:50.452 06:50:05 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:00:50.452 06:50:05 -- common/autobuild_common.sh@447 -- $ date +%s 00:00:50.452 06:50:05 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721796605.XXXXXX 00:00:50.452 06:50:05 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721796605.aJIQ97 00:00:50.452 06:50:05 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:00:50.452 06:50:05 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:00:50.452 06:50:05 -- 
common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:00:50.452 06:50:05 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:50.452 06:50:05 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:50.452 06:50:05 -- common/autobuild_common.sh@463 -- $ get_config_params 00:00:50.452 06:50:05 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:50.452 06:50:05 -- common/autotest_common.sh@10 -- $ set +x 00:00:50.452 06:50:05 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:00:50.452 06:50:05 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:00:50.452 06:50:05 -- pm/common@17 -- $ local monitor 00:00:50.452 06:50:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:50.452 06:50:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:50.452 06:50:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:50.452 06:50:05 -- pm/common@21 -- $ date +%s 00:00:50.452 06:50:05 -- pm/common@21 -- $ date +%s 00:00:50.452 06:50:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:50.452 06:50:05 -- pm/common@25 -- $ sleep 1 00:00:50.452 06:50:05 -- pm/common@21 -- $ date +%s 00:00:50.452 06:50:05 -- pm/common@21 -- $ date +%s 00:00:50.452 06:50:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721796605 00:00:50.452 06:50:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721796605 00:00:50.452 06:50:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721796605 00:00:50.452 06:50:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721796605 00:00:50.712 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721796605_collect-vmstat.pm.log 00:00:50.712 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721796605_collect-cpu-load.pm.log 00:00:50.712 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721796605_collect-cpu-temp.pm.log 00:00:50.712 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721796605_collect-bmc-pm.bmc.pm.log 00:00:51.650 06:50:06 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:00:51.650 06:50:06 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:51.650 06:50:06 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:51.650 06:50:06 -- 
spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:51.650 06:50:06 -- spdk/autobuild.sh@16 -- $ date -u 00:00:51.650 Wed Jul 24 04:50:06 AM UTC 2024 00:00:51.650 06:50:06 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:51.650 v24.09-pre-309-g78cbcfdde 00:00:51.650 06:50:06 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:00:51.650 06:50:06 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:00:51.650 06:50:06 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:51.650 06:50:06 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:51.650 06:50:06 -- common/autotest_common.sh@10 -- $ set +x 00:00:51.650 ************************************ 00:00:51.650 START TEST asan 00:00:51.650 ************************************ 00:00:51.650 06:50:06 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:00:51.650 using asan 00:00:51.650 00:00:51.650 real 0m0.001s 00:00:51.650 user 0m0.000s 00:00:51.650 sys 0m0.000s 00:00:51.650 06:50:06 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:51.650 06:50:06 asan -- common/autotest_common.sh@10 -- $ set +x 00:00:51.650 ************************************ 00:00:51.650 END TEST asan 00:00:51.650 ************************************ 00:00:51.650 06:50:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:51.650 06:50:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:51.650 06:50:06 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:51.650 06:50:06 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:51.650 06:50:06 -- common/autotest_common.sh@10 -- $ set +x 00:00:51.650 ************************************ 00:00:51.650 START TEST ubsan 00:00:51.650 ************************************ 00:00:51.650 06:50:06 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:00:51.650 using ubsan 00:00:51.650 00:00:51.650 real 0m0.000s 00:00:51.650 user 0m0.000s 00:00:51.650 sys 0m0.000s 00:00:51.650 06:50:06 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:51.650 06:50:06 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:51.650 ************************************ 00:00:51.650 END TEST ubsan 00:00:51.650 ************************************ 00:00:51.650 06:50:06 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:51.650 06:50:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:51.650 06:50:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:51.650 06:50:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:51.650 06:50:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:51.650 06:50:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:51.650 06:50:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:51.650 06:50:06 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:51.650 06:50:06 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:00:51.909 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:00:51.910 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:00:52.168 Using 'verbs' RDMA provider 00:01:05.319 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 
00:01:20.212 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:20.212 Creating mk/config.mk...done. 00:01:20.212 Creating mk/cc.flags.mk...done. 00:01:20.212 Type 'make' to build. 00:01:20.212 06:50:33 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:20.212 06:50:33 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:20.212 06:50:33 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:20.212 06:50:33 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.212 ************************************ 00:01:20.212 START TEST make 00:01:20.212 ************************************ 00:01:20.212 06:50:33 make -- common/autotest_common.sh@1123 -- $ make -j112 00:01:20.212 make[1]: Nothing to be done for 'all'. 00:01:28.333 The Meson build system 00:01:28.333 Version: 1.3.1 00:01:28.333 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk 00:01:28.333 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp 00:01:28.333 Build type: native build 00:01:28.333 Program cat found: YES (/usr/bin/cat) 00:01:28.333 Project name: DPDK 00:01:28.333 Project version: 24.03.0 00:01:28.333 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:28.333 C linker for the host machine: cc ld.bfd 2.39-16 00:01:28.333 Host machine cpu family: x86_64 00:01:28.333 Host machine cpu: x86_64 00:01:28.333 Message: ## Building in Developer Mode ## 00:01:28.333 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:28.333 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:28.333 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:28.333 Program python3 found: YES (/usr/bin/python3) 00:01:28.333 Program cat found: YES (/usr/bin/cat) 00:01:28.333 Compiler for C supports arguments -march=native: YES 00:01:28.333 Checking for size of "void *" : 8 00:01:28.333 Checking for size of "void *" : 8 (cached) 00:01:28.333 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:28.333 Library m found: YES 00:01:28.333 Library numa found: YES 00:01:28.333 Has header "numaif.h" : YES 00:01:28.333 Library fdt found: NO 00:01:28.333 Library execinfo found: NO 00:01:28.333 Has header "execinfo.h" : YES 00:01:28.333 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:28.333 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:28.333 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:28.333 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:28.333 Run-time dependency openssl found: YES 3.0.9 00:01:28.333 Run-time dependency libpcap found: YES 1.10.4 00:01:28.333 Has header "pcap.h" with dependency libpcap: YES 00:01:28.333 Compiler for C supports arguments -Wcast-qual: YES 00:01:28.333 Compiler for C supports arguments -Wdeprecated: YES 00:01:28.333 Compiler for C supports arguments -Wformat: YES 00:01:28.333 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:28.333 Compiler for C supports arguments -Wformat-security: NO 00:01:28.333 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:28.333 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:28.333 Compiler for C supports arguments -Wnested-externs: YES 00:01:28.334 Compiler for C supports arguments -Wold-style-definition: YES 00:01:28.334 Compiler for C supports arguments 
-Wpointer-arith: YES 00:01:28.334 Compiler for C supports arguments -Wsign-compare: YES 00:01:28.334 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:28.334 Compiler for C supports arguments -Wundef: YES 00:01:28.334 Compiler for C supports arguments -Wwrite-strings: YES 00:01:28.334 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:28.334 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:28.334 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:28.334 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:28.334 Program objdump found: YES (/usr/bin/objdump) 00:01:28.334 Compiler for C supports arguments -mavx512f: YES 00:01:28.334 Checking if "AVX512 checking" compiles: YES 00:01:28.334 Fetching value of define "__SSE4_2__" : 1 00:01:28.334 Fetching value of define "__AES__" : 1 00:01:28.334 Fetching value of define "__AVX__" : 1 00:01:28.334 Fetching value of define "__AVX2__" : 1 00:01:28.334 Fetching value of define "__AVX512BW__" : 1 00:01:28.334 Fetching value of define "__AVX512CD__" : 1 00:01:28.334 Fetching value of define "__AVX512DQ__" : 1 00:01:28.334 Fetching value of define "__AVX512F__" : 1 00:01:28.334 Fetching value of define "__AVX512VL__" : 1 00:01:28.334 Fetching value of define "__PCLMUL__" : 1 00:01:28.334 Fetching value of define "__RDRND__" : 1 00:01:28.334 Fetching value of define "__RDSEED__" : 1 00:01:28.334 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:28.334 Fetching value of define "__znver1__" : (undefined) 00:01:28.334 Fetching value of define "__znver2__" : (undefined) 00:01:28.334 Fetching value of define "__znver3__" : (undefined) 00:01:28.334 Fetching value of define "__znver4__" : (undefined) 00:01:28.334 Library asan found: YES 00:01:28.334 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:28.334 Message: lib/log: Defining dependency "log" 00:01:28.334 Message: lib/kvargs: Defining dependency "kvargs" 00:01:28.334 Message: lib/telemetry: Defining dependency "telemetry" 00:01:28.334 Library rt found: YES 00:01:28.334 Checking for function "getentropy" : NO 00:01:28.334 Message: lib/eal: Defining dependency "eal" 00:01:28.334 Message: lib/ring: Defining dependency "ring" 00:01:28.334 Message: lib/rcu: Defining dependency "rcu" 00:01:28.334 Message: lib/mempool: Defining dependency "mempool" 00:01:28.334 Message: lib/mbuf: Defining dependency "mbuf" 00:01:28.334 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:28.334 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:28.334 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:28.334 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:28.334 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:28.334 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:28.334 Compiler for C supports arguments -mpclmul: YES 00:01:28.334 Compiler for C supports arguments -maes: YES 00:01:28.334 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:28.334 Compiler for C supports arguments -mavx512bw: YES 00:01:28.334 Compiler for C supports arguments -mavx512dq: YES 00:01:28.334 Compiler for C supports arguments -mavx512vl: YES 00:01:28.334 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:28.334 Compiler for C supports arguments -mavx2: YES 00:01:28.334 Compiler for C supports arguments -mavx: YES 00:01:28.334 Message: lib/net: Defining dependency "net" 00:01:28.334 Message: lib/meter: Defining dependency "meter" 
00:01:28.334 Message: lib/ethdev: Defining dependency "ethdev" 00:01:28.334 Message: lib/pci: Defining dependency "pci" 00:01:28.334 Message: lib/cmdline: Defining dependency "cmdline" 00:01:28.334 Message: lib/hash: Defining dependency "hash" 00:01:28.334 Message: lib/timer: Defining dependency "timer" 00:01:28.334 Message: lib/compressdev: Defining dependency "compressdev" 00:01:28.334 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:28.334 Message: lib/dmadev: Defining dependency "dmadev" 00:01:28.334 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:28.334 Message: lib/power: Defining dependency "power" 00:01:28.334 Message: lib/reorder: Defining dependency "reorder" 00:01:28.334 Message: lib/security: Defining dependency "security" 00:01:28.334 Has header "linux/userfaultfd.h" : YES 00:01:28.334 Has header "linux/vduse.h" : YES 00:01:28.334 Message: lib/vhost: Defining dependency "vhost" 00:01:28.334 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:28.334 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:28.334 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:28.334 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:28.334 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:28.334 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:28.334 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:28.334 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:28.334 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:28.334 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:28.334 Program doxygen found: YES (/usr/bin/doxygen) 00:01:28.334 Configuring doxy-api-html.conf using configuration 00:01:28.334 Configuring doxy-api-man.conf using configuration 00:01:28.334 Program mandb found: YES (/usr/bin/mandb) 00:01:28.334 Program sphinx-build found: NO 00:01:28.334 Configuring rte_build_config.h using configuration 00:01:28.334 Message: 00:01:28.334 ================= 00:01:28.334 Applications Enabled 00:01:28.334 ================= 00:01:28.334 00:01:28.334 apps: 00:01:28.334 00:01:28.334 00:01:28.334 Message: 00:01:28.334 ================= 00:01:28.334 Libraries Enabled 00:01:28.334 ================= 00:01:28.334 00:01:28.334 libs: 00:01:28.334 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:28.334 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:28.334 cryptodev, dmadev, power, reorder, security, vhost, 00:01:28.334 00:01:28.334 Message: 00:01:28.334 =============== 00:01:28.334 Drivers Enabled 00:01:28.334 =============== 00:01:28.334 00:01:28.334 common: 00:01:28.334 00:01:28.334 bus: 00:01:28.334 pci, vdev, 00:01:28.334 mempool: 00:01:28.334 ring, 00:01:28.334 dma: 00:01:28.334 00:01:28.334 net: 00:01:28.334 00:01:28.334 crypto: 00:01:28.334 00:01:28.334 compress: 00:01:28.334 00:01:28.334 vdpa: 00:01:28.334 00:01:28.334 00:01:28.334 Message: 00:01:28.334 ================= 00:01:28.334 Content Skipped 00:01:28.334 ================= 00:01:28.334 00:01:28.334 apps: 00:01:28.334 dumpcap: explicitly disabled via build config 00:01:28.334 graph: explicitly disabled via build config 00:01:28.334 pdump: explicitly disabled via build config 00:01:28.334 proc-info: explicitly disabled via build config 00:01:28.334 test-acl: explicitly disabled via build config 00:01:28.334 test-bbdev: explicitly disabled via build config 
00:01:28.334 test-cmdline: explicitly disabled via build config 00:01:28.334 test-compress-perf: explicitly disabled via build config 00:01:28.334 test-crypto-perf: explicitly disabled via build config 00:01:28.334 test-dma-perf: explicitly disabled via build config 00:01:28.334 test-eventdev: explicitly disabled via build config 00:01:28.334 test-fib: explicitly disabled via build config 00:01:28.334 test-flow-perf: explicitly disabled via build config 00:01:28.334 test-gpudev: explicitly disabled via build config 00:01:28.334 test-mldev: explicitly disabled via build config 00:01:28.334 test-pipeline: explicitly disabled via build config 00:01:28.334 test-pmd: explicitly disabled via build config 00:01:28.334 test-regex: explicitly disabled via build config 00:01:28.334 test-sad: explicitly disabled via build config 00:01:28.334 test-security-perf: explicitly disabled via build config 00:01:28.334 00:01:28.334 libs: 00:01:28.334 argparse: explicitly disabled via build config 00:01:28.334 metrics: explicitly disabled via build config 00:01:28.334 acl: explicitly disabled via build config 00:01:28.334 bbdev: explicitly disabled via build config 00:01:28.334 bitratestats: explicitly disabled via build config 00:01:28.334 bpf: explicitly disabled via build config 00:01:28.334 cfgfile: explicitly disabled via build config 00:01:28.334 distributor: explicitly disabled via build config 00:01:28.334 efd: explicitly disabled via build config 00:01:28.334 eventdev: explicitly disabled via build config 00:01:28.334 dispatcher: explicitly disabled via build config 00:01:28.334 gpudev: explicitly disabled via build config 00:01:28.334 gro: explicitly disabled via build config 00:01:28.334 gso: explicitly disabled via build config 00:01:28.334 ip_frag: explicitly disabled via build config 00:01:28.334 jobstats: explicitly disabled via build config 00:01:28.334 latencystats: explicitly disabled via build config 00:01:28.334 lpm: explicitly disabled via build config 00:01:28.334 member: explicitly disabled via build config 00:01:28.334 pcapng: explicitly disabled via build config 00:01:28.334 rawdev: explicitly disabled via build config 00:01:28.334 regexdev: explicitly disabled via build config 00:01:28.334 mldev: explicitly disabled via build config 00:01:28.334 rib: explicitly disabled via build config 00:01:28.334 sched: explicitly disabled via build config 00:01:28.334 stack: explicitly disabled via build config 00:01:28.334 ipsec: explicitly disabled via build config 00:01:28.334 pdcp: explicitly disabled via build config 00:01:28.334 fib: explicitly disabled via build config 00:01:28.334 port: explicitly disabled via build config 00:01:28.334 pdump: explicitly disabled via build config 00:01:28.334 table: explicitly disabled via build config 00:01:28.334 pipeline: explicitly disabled via build config 00:01:28.334 graph: explicitly disabled via build config 00:01:28.334 node: explicitly disabled via build config 00:01:28.335 00:01:28.335 drivers: 00:01:28.335 common/cpt: not in enabled drivers build config 00:01:28.335 common/dpaax: not in enabled drivers build config 00:01:28.335 common/iavf: not in enabled drivers build config 00:01:28.335 common/idpf: not in enabled drivers build config 00:01:28.335 common/ionic: not in enabled drivers build config 00:01:28.335 common/mvep: not in enabled drivers build config 00:01:28.335 common/octeontx: not in enabled drivers build config 00:01:28.335 bus/auxiliary: not in enabled drivers build config 00:01:28.335 bus/cdx: not in enabled drivers build config 
00:01:28.335 bus/dpaa: not in enabled drivers build config 00:01:28.335 bus/fslmc: not in enabled drivers build config 00:01:28.335 bus/ifpga: not in enabled drivers build config 00:01:28.335 bus/platform: not in enabled drivers build config 00:01:28.335 bus/uacce: not in enabled drivers build config 00:01:28.335 bus/vmbus: not in enabled drivers build config 00:01:28.335 common/cnxk: not in enabled drivers build config 00:01:28.335 common/mlx5: not in enabled drivers build config 00:01:28.335 common/nfp: not in enabled drivers build config 00:01:28.335 common/nitrox: not in enabled drivers build config 00:01:28.335 common/qat: not in enabled drivers build config 00:01:28.335 common/sfc_efx: not in enabled drivers build config 00:01:28.335 mempool/bucket: not in enabled drivers build config 00:01:28.335 mempool/cnxk: not in enabled drivers build config 00:01:28.335 mempool/dpaa: not in enabled drivers build config 00:01:28.335 mempool/dpaa2: not in enabled drivers build config 00:01:28.335 mempool/octeontx: not in enabled drivers build config 00:01:28.335 mempool/stack: not in enabled drivers build config 00:01:28.335 dma/cnxk: not in enabled drivers build config 00:01:28.335 dma/dpaa: not in enabled drivers build config 00:01:28.335 dma/dpaa2: not in enabled drivers build config 00:01:28.335 dma/hisilicon: not in enabled drivers build config 00:01:28.335 dma/idxd: not in enabled drivers build config 00:01:28.335 dma/ioat: not in enabled drivers build config 00:01:28.335 dma/skeleton: not in enabled drivers build config 00:01:28.335 net/af_packet: not in enabled drivers build config 00:01:28.335 net/af_xdp: not in enabled drivers build config 00:01:28.335 net/ark: not in enabled drivers build config 00:01:28.335 net/atlantic: not in enabled drivers build config 00:01:28.335 net/avp: not in enabled drivers build config 00:01:28.335 net/axgbe: not in enabled drivers build config 00:01:28.335 net/bnx2x: not in enabled drivers build config 00:01:28.335 net/bnxt: not in enabled drivers build config 00:01:28.335 net/bonding: not in enabled drivers build config 00:01:28.335 net/cnxk: not in enabled drivers build config 00:01:28.335 net/cpfl: not in enabled drivers build config 00:01:28.335 net/cxgbe: not in enabled drivers build config 00:01:28.335 net/dpaa: not in enabled drivers build config 00:01:28.335 net/dpaa2: not in enabled drivers build config 00:01:28.335 net/e1000: not in enabled drivers build config 00:01:28.335 net/ena: not in enabled drivers build config 00:01:28.335 net/enetc: not in enabled drivers build config 00:01:28.335 net/enetfec: not in enabled drivers build config 00:01:28.335 net/enic: not in enabled drivers build config 00:01:28.335 net/failsafe: not in enabled drivers build config 00:01:28.335 net/fm10k: not in enabled drivers build config 00:01:28.335 net/gve: not in enabled drivers build config 00:01:28.335 net/hinic: not in enabled drivers build config 00:01:28.335 net/hns3: not in enabled drivers build config 00:01:28.335 net/i40e: not in enabled drivers build config 00:01:28.335 net/iavf: not in enabled drivers build config 00:01:28.335 net/ice: not in enabled drivers build config 00:01:28.335 net/idpf: not in enabled drivers build config 00:01:28.335 net/igc: not in enabled drivers build config 00:01:28.335 net/ionic: not in enabled drivers build config 00:01:28.335 net/ipn3ke: not in enabled drivers build config 00:01:28.335 net/ixgbe: not in enabled drivers build config 00:01:28.335 net/mana: not in enabled drivers build config 00:01:28.335 net/memif: not in 
enabled drivers build config 00:01:28.335 net/mlx4: not in enabled drivers build config 00:01:28.335 net/mlx5: not in enabled drivers build config 00:01:28.335 net/mvneta: not in enabled drivers build config 00:01:28.335 net/mvpp2: not in enabled drivers build config 00:01:28.335 net/netvsc: not in enabled drivers build config 00:01:28.335 net/nfb: not in enabled drivers build config 00:01:28.335 net/nfp: not in enabled drivers build config 00:01:28.335 net/ngbe: not in enabled drivers build config 00:01:28.335 net/null: not in enabled drivers build config 00:01:28.335 net/octeontx: not in enabled drivers build config 00:01:28.335 net/octeon_ep: not in enabled drivers build config 00:01:28.335 net/pcap: not in enabled drivers build config 00:01:28.335 net/pfe: not in enabled drivers build config 00:01:28.335 net/qede: not in enabled drivers build config 00:01:28.335 net/ring: not in enabled drivers build config 00:01:28.335 net/sfc: not in enabled drivers build config 00:01:28.335 net/softnic: not in enabled drivers build config 00:01:28.335 net/tap: not in enabled drivers build config 00:01:28.335 net/thunderx: not in enabled drivers build config 00:01:28.335 net/txgbe: not in enabled drivers build config 00:01:28.335 net/vdev_netvsc: not in enabled drivers build config 00:01:28.335 net/vhost: not in enabled drivers build config 00:01:28.335 net/virtio: not in enabled drivers build config 00:01:28.335 net/vmxnet3: not in enabled drivers build config 00:01:28.335 raw/*: missing internal dependency, "rawdev" 00:01:28.335 crypto/armv8: not in enabled drivers build config 00:01:28.335 crypto/bcmfs: not in enabled drivers build config 00:01:28.335 crypto/caam_jr: not in enabled drivers build config 00:01:28.335 crypto/ccp: not in enabled drivers build config 00:01:28.335 crypto/cnxk: not in enabled drivers build config 00:01:28.335 crypto/dpaa_sec: not in enabled drivers build config 00:01:28.335 crypto/dpaa2_sec: not in enabled drivers build config 00:01:28.335 crypto/ipsec_mb: not in enabled drivers build config 00:01:28.335 crypto/mlx5: not in enabled drivers build config 00:01:28.335 crypto/mvsam: not in enabled drivers build config 00:01:28.335 crypto/nitrox: not in enabled drivers build config 00:01:28.335 crypto/null: not in enabled drivers build config 00:01:28.335 crypto/octeontx: not in enabled drivers build config 00:01:28.335 crypto/openssl: not in enabled drivers build config 00:01:28.335 crypto/scheduler: not in enabled drivers build config 00:01:28.335 crypto/uadk: not in enabled drivers build config 00:01:28.335 crypto/virtio: not in enabled drivers build config 00:01:28.335 compress/isal: not in enabled drivers build config 00:01:28.335 compress/mlx5: not in enabled drivers build config 00:01:28.335 compress/nitrox: not in enabled drivers build config 00:01:28.335 compress/octeontx: not in enabled drivers build config 00:01:28.335 compress/zlib: not in enabled drivers build config 00:01:28.335 regex/*: missing internal dependency, "regexdev" 00:01:28.335 ml/*: missing internal dependency, "mldev" 00:01:28.335 vdpa/ifc: not in enabled drivers build config 00:01:28.335 vdpa/mlx5: not in enabled drivers build config 00:01:28.335 vdpa/nfp: not in enabled drivers build config 00:01:28.335 vdpa/sfc: not in enabled drivers build config 00:01:28.335 event/*: missing internal dependency, "eventdev" 00:01:28.335 baseband/*: missing internal dependency, "bbdev" 00:01:28.335 gpu/*: missing internal dependency, "gpudev" 00:01:28.335 00:01:28.335 00:01:28.335 Build targets in project: 85 
00:01:28.335 00:01:28.335 DPDK 24.03.0 00:01:28.335 00:01:28.335 User defined options 00:01:28.335 buildtype : debug 00:01:28.335 default_library : shared 00:01:28.335 libdir : lib 00:01:28.335 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:28.335 b_sanitize : address 00:01:28.335 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:28.335 c_link_args : 00:01:28.335 cpu_instruction_set: native 00:01:28.335 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:28.335 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:28.335 enable_docs : false 00:01:28.335 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:28.335 enable_kmods : false 00:01:28.335 max_lcores : 128 00:01:28.335 tests : false 00:01:28.335 00:01:28.335 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:28.606 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:01:28.606 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:28.606 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:28.869 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:28.869 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:28.869 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:28.869 [6/268] Linking static target lib/librte_kvargs.a 00:01:28.869 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:28.869 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:28.869 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:28.869 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:28.869 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:28.869 [12/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:28.869 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:28.869 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:28.869 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:28.869 [16/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:28.869 [17/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:28.869 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:28.869 [19/268] Linking static target lib/librte_log.a 00:01:28.869 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:29.128 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:29.128 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:29.128 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:29.128 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:29.128 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 
00:01:29.128 [26/268] Linking static target lib/librte_pci.a 00:01:29.128 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:29.128 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:29.128 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:29.128 [30/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:29.128 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:29.128 [32/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:29.128 [33/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:29.128 [34/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:29.128 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:29.387 [36/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:29.387 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:29.387 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:29.387 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:29.387 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:29.387 [41/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:29.387 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:29.387 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:29.387 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:29.387 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:29.387 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:29.387 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:29.387 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:29.387 [49/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:29.387 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:29.387 [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:29.387 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:29.387 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:29.387 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:29.387 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:29.387 [56/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:29.387 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:29.387 [58/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:29.387 [59/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:29.387 [60/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.387 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:29.388 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:29.388 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:29.388 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:29.388 [65/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:29.388 [66/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:29.388 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:29.388 [68/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:29.388 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:29.388 [70/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.388 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:29.388 [72/268] Linking static target lib/librte_meter.a 00:01:29.388 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:29.388 [74/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:29.388 [75/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:29.388 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:29.388 [77/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:29.388 [78/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:29.388 [79/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:29.388 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:29.388 [81/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:29.388 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:29.388 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:29.388 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:29.388 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:29.388 [86/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:29.388 [87/268] Linking static target lib/librte_ring.a 00:01:29.388 [88/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:29.388 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:29.388 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:29.388 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:29.388 [92/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:29.388 [93/268] Linking static target lib/librte_telemetry.a 00:01:29.388 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:29.388 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:29.388 [96/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:29.388 [97/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:29.388 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:29.388 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:29.388 [100/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:29.388 [101/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:29.388 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:29.388 [103/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:29.388 [104/268] Linking static target lib/librte_cmdline.a 00:01:29.388 [105/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:29.388 
[106/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:29.647 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:29.647 [108/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:29.647 [109/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:29.647 [110/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:29.647 [111/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:29.647 [112/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:29.647 [113/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:29.647 [114/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:29.647 [115/268] Linking static target lib/librte_timer.a 00:01:29.647 [116/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:29.647 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:29.647 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:29.647 [119/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:29.647 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:29.647 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:29.647 [122/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:29.647 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:29.647 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:29.647 [125/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:29.647 [126/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:29.647 [127/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:29.647 [128/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:29.647 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:29.647 [130/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:29.647 [131/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:29.647 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:29.647 [133/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:29.647 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:29.647 [135/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:29.647 [136/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:29.647 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:29.647 [138/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:29.647 [139/268] Linking static target lib/librte_mempool.a 00:01:29.647 [140/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:29.647 [141/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:29.647 [142/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.647 [143/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:29.647 [144/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:29.647 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:29.647 [146/268] Linking 
static target lib/librte_rcu.a 00:01:29.647 [147/268] Linking static target lib/librte_dmadev.a 00:01:29.647 [148/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:29.647 [149/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:29.647 [150/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.647 [151/268] Linking static target lib/librte_net.a 00:01:29.647 [152/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:29.647 [153/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:29.647 [154/268] Linking target lib/librte_log.so.24.1 00:01:29.647 [155/268] Linking static target lib/librte_eal.a 00:01:29.647 [156/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:29.905 [157/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:29.905 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:29.905 [159/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.905 [160/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:29.905 [161/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:29.905 [162/268] Linking static target lib/librte_power.a 00:01:29.905 [163/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:29.905 [164/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:29.905 [165/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:29.905 [166/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:29.905 [167/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:29.905 [168/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:29.905 [169/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:29.905 [170/268] Linking static target lib/librte_compressdev.a 00:01:29.905 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:29.905 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:29.905 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:29.905 [174/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:29.905 [175/268] Linking target lib/librte_kvargs.so.24.1 00:01:29.905 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:29.905 [177/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:29.905 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:29.905 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:29.905 [180/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:29.905 [181/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:29.905 [182/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.905 [183/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.905 [184/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:29.905 [185/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:29.905 [186/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:29.905 
[187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:30.165 [188/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:30.165 [189/268] Linking static target lib/librte_security.a 00:01:30.165 [190/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:30.165 [191/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:30.165 [192/268] Linking static target lib/librte_reorder.a 00:01:30.165 [193/268] Linking static target drivers/librte_bus_vdev.a 00:01:30.165 [194/268] Linking target lib/librte_telemetry.so.24.1 00:01:30.165 [195/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.165 [196/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.165 [197/268] Linking static target lib/librte_mbuf.a 00:01:30.165 [198/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:30.165 [199/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:30.165 [200/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:30.165 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:30.165 [202/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:30.165 [203/268] Linking static target lib/librte_hash.a 00:01:30.165 [204/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:30.165 [205/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:30.165 [206/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:30.165 [207/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:30.165 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:30.165 [209/268] Linking static target drivers/librte_bus_pci.a 00:01:30.165 [210/268] Linking static target drivers/librte_mempool_ring.a 00:01:30.424 [211/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.424 [212/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.424 [213/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:30.424 [214/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.683 [215/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:30.683 [216/268] Linking static target lib/librte_cryptodev.a 00:01:30.683 [217/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.683 [218/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.683 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.683 [220/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.683 [221/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.942 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.202 [223/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.202 [224/268] Compiling C 
object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:31.202 [225/268] Linking static target lib/librte_ethdev.a 00:01:31.202 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.138 [227/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:32.772 [228/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.674 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:34.674 [230/268] Linking static target lib/librte_vhost.a 00:01:37.214 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.505 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.413 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.413 [234/268] Linking target lib/librte_eal.so.24.1 00:01:42.672 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:42.672 [236/268] Linking target lib/librte_pci.so.24.1 00:01:42.672 [237/268] Linking target lib/librte_timer.so.24.1 00:01:42.672 [238/268] Linking target lib/librte_ring.so.24.1 00:01:42.672 [239/268] Linking target lib/librte_dmadev.so.24.1 00:01:42.672 [240/268] Linking target lib/librte_meter.so.24.1 00:01:42.672 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:42.931 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:42.931 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:42.931 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:42.931 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:42.931 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:42.931 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:42.931 [248/268] Linking target lib/librte_rcu.so.24.1 00:01:42.931 [249/268] Linking target lib/librte_mempool.so.24.1 00:01:42.931 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:42.931 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:43.189 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:43.189 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:43.189 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:43.189 [255/268] Linking target lib/librte_reorder.so.24.1 00:01:43.189 [256/268] Linking target lib/librte_net.so.24.1 00:01:43.189 [257/268] Linking target lib/librte_compressdev.so.24.1 00:01:43.189 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:01:43.448 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:43.448 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:43.448 [261/268] Linking target lib/librte_security.so.24.1 00:01:43.448 [262/268] Linking target lib/librte_hash.so.24.1 00:01:43.448 [263/268] Linking target lib/librte_cmdline.so.24.1 00:01:43.448 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:43.708 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:43.708 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:43.708 [267/268] Linking 
target lib/librte_power.so.24.1 00:01:43.708 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:43.708 INFO: autodetecting backend as ninja 00:01:43.708 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 112 00:01:45.089 CC lib/log/log.o 00:01:45.089 CC lib/log/log_flags.o 00:01:45.089 CC lib/log/log_deprecated.o 00:01:45.089 CC lib/ut_mock/mock.o 00:01:45.089 CC lib/ut/ut.o 00:01:45.089 LIB libspdk_log.a 00:01:45.089 LIB libspdk_ut_mock.a 00:01:45.089 SO libspdk_ut_mock.so.6.0 00:01:45.089 SO libspdk_log.so.7.0 00:01:45.089 LIB libspdk_ut.a 00:01:45.089 SO libspdk_ut.so.2.0 00:01:45.089 SYMLINK libspdk_ut_mock.so 00:01:45.089 SYMLINK libspdk_log.so 00:01:45.089 SYMLINK libspdk_ut.so 00:01:45.348 CC lib/util/base64.o 00:01:45.348 CXX lib/trace_parser/trace.o 00:01:45.348 CC lib/util/crc16.o 00:01:45.348 CC lib/util/bit_array.o 00:01:45.348 CC lib/util/cpuset.o 00:01:45.348 CC lib/util/crc32.o 00:01:45.348 CC lib/util/crc32_ieee.o 00:01:45.348 CC lib/util/crc32c.o 00:01:45.348 CC lib/util/crc64.o 00:01:45.348 CC lib/util/dif.o 00:01:45.348 CC lib/util/fd.o 00:01:45.348 CC lib/util/fd_group.o 00:01:45.348 CC lib/util/iov.o 00:01:45.348 CC lib/util/file.o 00:01:45.348 CC lib/util/hexlify.o 00:01:45.348 CC lib/util/math.o 00:01:45.348 CC lib/util/net.o 00:01:45.348 CC lib/util/pipe.o 00:01:45.348 CC lib/util/strerror_tls.o 00:01:45.348 CC lib/util/string.o 00:01:45.348 CC lib/util/uuid.o 00:01:45.348 CC lib/util/xor.o 00:01:45.348 CC lib/util/zipf.o 00:01:45.348 CC lib/ioat/ioat.o 00:01:45.348 CC lib/dma/dma.o 00:01:45.608 CC lib/vfio_user/host/vfio_user_pci.o 00:01:45.608 CC lib/vfio_user/host/vfio_user.o 00:01:45.608 LIB libspdk_dma.a 00:01:45.608 SO libspdk_dma.so.4.0 00:01:45.868 LIB libspdk_ioat.a 00:01:45.868 SYMLINK libspdk_dma.so 00:01:45.868 SO libspdk_ioat.so.7.0 00:01:45.868 LIB libspdk_vfio_user.a 00:01:45.868 SYMLINK libspdk_ioat.so 00:01:45.868 SO libspdk_vfio_user.so.5.0 00:01:45.868 SYMLINK libspdk_vfio_user.so 00:01:45.868 LIB libspdk_util.a 00:01:46.127 SO libspdk_util.so.10.0 00:01:46.127 SYMLINK libspdk_util.so 00:01:46.127 LIB libspdk_trace_parser.a 00:01:46.386 SO libspdk_trace_parser.so.5.0 00:01:46.386 SYMLINK libspdk_trace_parser.so 00:01:46.645 CC lib/json/json_parse.o 00:01:46.645 CC lib/json/json_util.o 00:01:46.645 CC lib/rdma_utils/rdma_utils.o 00:01:46.645 CC lib/json/json_write.o 00:01:46.645 CC lib/env_dpdk/env.o 00:01:46.645 CC lib/env_dpdk/memory.o 00:01:46.645 CC lib/conf/conf.o 00:01:46.645 CC lib/env_dpdk/init.o 00:01:46.645 CC lib/env_dpdk/pci.o 00:01:46.645 CC lib/env_dpdk/threads.o 00:01:46.645 CC lib/env_dpdk/pci_ioat.o 00:01:46.645 CC lib/idxd/idxd_user.o 00:01:46.645 CC lib/rdma_provider/common.o 00:01:46.645 CC lib/idxd/idxd.o 00:01:46.645 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:46.645 CC lib/env_dpdk/pci_virtio.o 00:01:46.645 CC lib/env_dpdk/pci_vmd.o 00:01:46.645 CC lib/idxd/idxd_kernel.o 00:01:46.645 CC lib/env_dpdk/pci_idxd.o 00:01:46.645 CC lib/env_dpdk/pci_event.o 00:01:46.645 CC lib/env_dpdk/sigbus_handler.o 00:01:46.645 CC lib/env_dpdk/pci_dpdk.o 00:01:46.645 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:46.645 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:46.645 CC lib/vmd/led.o 00:01:46.645 CC lib/vmd/vmd.o 00:01:46.645 LIB libspdk_rdma_provider.a 00:01:46.904 SO libspdk_rdma_provider.so.6.0 00:01:46.904 LIB libspdk_conf.a 00:01:46.904 LIB libspdk_rdma_utils.a 00:01:46.904 SO libspdk_conf.so.6.0 00:01:46.904 SYMLINK libspdk_rdma_provider.so 00:01:46.904 SO 
libspdk_rdma_utils.so.1.0 00:01:46.904 LIB libspdk_json.a 00:01:46.904 SYMLINK libspdk_conf.so 00:01:46.904 SO libspdk_json.so.6.0 00:01:46.904 SYMLINK libspdk_rdma_utils.so 00:01:46.904 SYMLINK libspdk_json.so 00:01:47.163 LIB libspdk_idxd.a 00:01:47.163 SO libspdk_idxd.so.12.0 00:01:47.163 LIB libspdk_vmd.a 00:01:47.163 SYMLINK libspdk_idxd.so 00:01:47.163 SO libspdk_vmd.so.6.0 00:01:47.422 SYMLINK libspdk_vmd.so 00:01:47.422 CC lib/jsonrpc/jsonrpc_server.o 00:01:47.422 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:47.422 CC lib/jsonrpc/jsonrpc_client.o 00:01:47.422 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:47.682 LIB libspdk_jsonrpc.a 00:01:47.682 SO libspdk_jsonrpc.so.6.0 00:01:47.682 SYMLINK libspdk_jsonrpc.so 00:01:47.942 LIB libspdk_env_dpdk.a 00:01:47.942 SO libspdk_env_dpdk.so.15.0 00:01:47.942 SYMLINK libspdk_env_dpdk.so 00:01:48.201 CC lib/rpc/rpc.o 00:01:48.201 LIB libspdk_rpc.a 00:01:48.201 SO libspdk_rpc.so.6.0 00:01:48.461 SYMLINK libspdk_rpc.so 00:01:48.721 CC lib/trace/trace.o 00:01:48.721 CC lib/trace/trace_flags.o 00:01:48.721 CC lib/notify/notify.o 00:01:48.721 CC lib/trace/trace_rpc.o 00:01:48.721 CC lib/notify/notify_rpc.o 00:01:48.721 CC lib/keyring/keyring.o 00:01:48.721 CC lib/keyring/keyring_rpc.o 00:01:48.978 LIB libspdk_notify.a 00:01:48.978 SO libspdk_notify.so.6.0 00:01:48.978 LIB libspdk_trace.a 00:01:48.978 LIB libspdk_keyring.a 00:01:48.978 SYMLINK libspdk_notify.so 00:01:48.978 SO libspdk_keyring.so.1.0 00:01:48.978 SO libspdk_trace.so.10.0 00:01:48.978 SYMLINK libspdk_keyring.so 00:01:48.978 SYMLINK libspdk_trace.so 00:01:49.582 CC lib/sock/sock.o 00:01:49.582 CC lib/sock/sock_rpc.o 00:01:49.582 CC lib/thread/thread.o 00:01:49.582 CC lib/thread/iobuf.o 00:01:49.860 LIB libspdk_sock.a 00:01:49.860 SO libspdk_sock.so.10.0 00:01:49.860 SYMLINK libspdk_sock.so 00:01:50.428 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:50.428 CC lib/nvme/nvme_ctrlr.o 00:01:50.428 CC lib/nvme/nvme_fabric.o 00:01:50.428 CC lib/nvme/nvme_ns_cmd.o 00:01:50.428 CC lib/nvme/nvme_ns.o 00:01:50.428 CC lib/nvme/nvme_pcie_common.o 00:01:50.428 CC lib/nvme/nvme_pcie.o 00:01:50.428 CC lib/nvme/nvme_transport.o 00:01:50.428 CC lib/nvme/nvme.o 00:01:50.428 CC lib/nvme/nvme_qpair.o 00:01:50.428 CC lib/nvme/nvme_quirks.o 00:01:50.428 CC lib/nvme/nvme_discovery.o 00:01:50.428 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:50.428 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:50.428 CC lib/nvme/nvme_tcp.o 00:01:50.428 CC lib/nvme/nvme_opal.o 00:01:50.428 CC lib/nvme/nvme_poll_group.o 00:01:50.428 CC lib/nvme/nvme_io_msg.o 00:01:50.428 CC lib/nvme/nvme_zns.o 00:01:50.428 CC lib/nvme/nvme_stubs.o 00:01:50.428 CC lib/nvme/nvme_auth.o 00:01:50.428 CC lib/nvme/nvme_cuse.o 00:01:50.428 CC lib/nvme/nvme_rdma.o 00:01:50.687 LIB libspdk_thread.a 00:01:50.945 SO libspdk_thread.so.10.1 00:01:50.945 SYMLINK libspdk_thread.so 00:01:51.204 CC lib/accel/accel_sw.o 00:01:51.204 CC lib/accel/accel.o 00:01:51.204 CC lib/accel/accel_rpc.o 00:01:51.204 CC lib/init/json_config.o 00:01:51.204 CC lib/init/subsystem.o 00:01:51.204 CC lib/init/subsystem_rpc.o 00:01:51.204 CC lib/init/rpc.o 00:01:51.204 CC lib/virtio/virtio.o 00:01:51.204 CC lib/virtio/virtio_vhost_user.o 00:01:51.204 CC lib/virtio/virtio_vfio_user.o 00:01:51.204 CC lib/virtio/virtio_pci.o 00:01:51.204 CC lib/blob/blobstore.o 00:01:51.204 CC lib/blob/blob_bs_dev.o 00:01:51.204 CC lib/blob/request.o 00:01:51.204 CC lib/blob/zeroes.o 00:01:51.462 LIB libspdk_init.a 00:01:51.462 SO libspdk_init.so.5.0 00:01:51.720 LIB libspdk_virtio.a 00:01:51.720 SYMLINK libspdk_init.so 00:01:51.720 
SO libspdk_virtio.so.7.0 00:01:51.720 SYMLINK libspdk_virtio.so 00:01:51.978 CC lib/event/reactor.o 00:01:51.979 CC lib/event/app.o 00:01:51.979 CC lib/event/log_rpc.o 00:01:51.979 CC lib/event/app_rpc.o 00:01:51.979 CC lib/event/scheduler_static.o 00:01:52.237 LIB libspdk_accel.a 00:01:52.237 LIB libspdk_nvme.a 00:01:52.237 SO libspdk_accel.so.16.0 00:01:52.237 SYMLINK libspdk_accel.so 00:01:52.237 SO libspdk_nvme.so.13.1 00:01:52.495 LIB libspdk_event.a 00:01:52.495 SO libspdk_event.so.14.0 00:01:52.495 SYMLINK libspdk_event.so 00:01:52.754 SYMLINK libspdk_nvme.so 00:01:52.754 CC lib/bdev/bdev.o 00:01:52.754 CC lib/bdev/bdev_rpc.o 00:01:52.754 CC lib/bdev/bdev_zone.o 00:01:52.754 CC lib/bdev/part.o 00:01:52.754 CC lib/bdev/scsi_nvme.o 00:01:54.130 LIB libspdk_blob.a 00:01:54.130 SO libspdk_blob.so.11.0 00:01:54.388 SYMLINK libspdk_blob.so 00:01:54.647 CC lib/blobfs/blobfs.o 00:01:54.647 CC lib/blobfs/tree.o 00:01:54.647 CC lib/lvol/lvol.o 00:01:54.906 LIB libspdk_bdev.a 00:01:55.166 SO libspdk_bdev.so.16.0 00:01:55.166 SYMLINK libspdk_bdev.so 00:01:55.425 LIB libspdk_blobfs.a 00:01:55.425 SO libspdk_blobfs.so.10.0 00:01:55.425 CC lib/nvmf/ctrlr.o 00:01:55.425 CC lib/nvmf/ctrlr_discovery.o 00:01:55.425 CC lib/scsi/dev.o 00:01:55.425 CC lib/nvmf/ctrlr_bdev.o 00:01:55.425 CC lib/scsi/lun.o 00:01:55.425 CC lib/scsi/scsi_bdev.o 00:01:55.425 CC lib/scsi/port.o 00:01:55.425 CC lib/nvmf/subsystem.o 00:01:55.425 CC lib/scsi/scsi.o 00:01:55.425 CC lib/nvmf/nvmf.o 00:01:55.425 CC lib/ublk/ublk.o 00:01:55.425 CC lib/nvmf/nvmf_rpc.o 00:01:55.425 CC lib/scsi/scsi_rpc.o 00:01:55.425 CC lib/ublk/ublk_rpc.o 00:01:55.425 CC lib/scsi/scsi_pr.o 00:01:55.425 CC lib/nvmf/stubs.o 00:01:55.425 CC lib/nvmf/transport.o 00:01:55.425 CC lib/scsi/task.o 00:01:55.425 CC lib/nvmf/tcp.o 00:01:55.425 LIB libspdk_lvol.a 00:01:55.425 CC lib/nvmf/mdns_server.o 00:01:55.425 CC lib/nbd/nbd.o 00:01:55.425 CC lib/nvmf/rdma.o 00:01:55.425 CC lib/nbd/nbd_rpc.o 00:01:55.425 CC lib/nvmf/auth.o 00:01:55.425 SYMLINK libspdk_blobfs.so 00:01:55.425 CC lib/ftl/ftl_init.o 00:01:55.425 CC lib/ftl/ftl_core.o 00:01:55.425 CC lib/ftl/ftl_sb.o 00:01:55.425 CC lib/ftl/ftl_layout.o 00:01:55.425 CC lib/ftl/ftl_debug.o 00:01:55.425 CC lib/ftl/ftl_io.o 00:01:55.425 CC lib/ftl/ftl_l2p.o 00:01:55.425 CC lib/ftl/ftl_l2p_flat.o 00:01:55.425 CC lib/ftl/ftl_nv_cache.o 00:01:55.425 CC lib/ftl/ftl_band.o 00:01:55.425 CC lib/ftl/ftl_reloc.o 00:01:55.425 CC lib/ftl/ftl_band_ops.o 00:01:55.425 CC lib/ftl/ftl_writer.o 00:01:55.425 CC lib/ftl/ftl_rq.o 00:01:55.684 CC lib/ftl/ftl_l2p_cache.o 00:01:55.684 CC lib/ftl/ftl_p2l.o 00:01:55.684 CC lib/ftl/mngt/ftl_mngt.o 00:01:55.684 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:55.684 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:55.684 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:55.684 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:55.684 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:55.684 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:55.684 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:55.684 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:55.684 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:55.684 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:55.684 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:55.684 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:55.684 CC lib/ftl/utils/ftl_md.o 00:01:55.684 CC lib/ftl/utils/ftl_conf.o 00:01:55.684 SO libspdk_lvol.so.10.0 00:01:55.684 CC lib/ftl/utils/ftl_mempool.o 00:01:55.684 CC lib/ftl/utils/ftl_bitmap.o 00:01:55.684 CC lib/ftl/utils/ftl_property.o 00:01:55.684 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:55.684 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 
00:01:55.684 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:55.684 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:55.684 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:55.684 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:55.684 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:55.684 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:55.684 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:55.684 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:55.684 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:55.684 CC lib/ftl/base/ftl_base_dev.o 00:01:55.684 CC lib/ftl/base/ftl_base_bdev.o 00:01:55.684 CC lib/ftl/ftl_trace.o 00:01:55.684 SYMLINK libspdk_lvol.so 00:01:55.942 LIB libspdk_nbd.a 00:01:56.200 SO libspdk_nbd.so.7.0 00:01:56.200 SYMLINK libspdk_nbd.so 00:01:56.200 LIB libspdk_scsi.a 00:01:56.200 LIB libspdk_ublk.a 00:01:56.458 SO libspdk_scsi.so.9.0 00:01:56.458 SO libspdk_ublk.so.3.0 00:01:56.458 SYMLINK libspdk_ublk.so 00:01:56.458 SYMLINK libspdk_scsi.so 00:01:56.717 LIB libspdk_ftl.a 00:01:56.717 CC lib/vhost/vhost_scsi.o 00:01:56.717 CC lib/vhost/vhost.o 00:01:56.717 CC lib/vhost/vhost_rpc.o 00:01:56.717 CC lib/iscsi/conn.o 00:01:56.717 CC lib/vhost/vhost_blk.o 00:01:56.717 CC lib/iscsi/init_grp.o 00:01:56.717 CC lib/vhost/rte_vhost_user.o 00:01:56.717 CC lib/iscsi/iscsi.o 00:01:56.717 CC lib/iscsi/md5.o 00:01:56.717 CC lib/iscsi/param.o 00:01:56.717 CC lib/iscsi/portal_grp.o 00:01:56.717 CC lib/iscsi/tgt_node.o 00:01:56.717 CC lib/iscsi/iscsi_subsystem.o 00:01:56.717 CC lib/iscsi/iscsi_rpc.o 00:01:56.717 CC lib/iscsi/task.o 00:01:56.717 SO libspdk_ftl.so.9.0 00:01:57.283 SYMLINK libspdk_ftl.so 00:01:57.541 LIB libspdk_nvmf.a 00:01:57.800 SO libspdk_nvmf.so.19.0 00:01:57.800 LIB libspdk_vhost.a 00:01:57.800 SO libspdk_vhost.so.8.0 00:01:57.800 SYMLINK libspdk_vhost.so 00:01:57.800 SYMLINK libspdk_nvmf.so 00:01:58.058 LIB libspdk_iscsi.a 00:01:58.058 SO libspdk_iscsi.so.8.0 00:01:58.317 SYMLINK libspdk_iscsi.so 00:01:58.884 CC module/env_dpdk/env_dpdk_rpc.o 00:01:59.141 CC module/accel/dsa/accel_dsa.o 00:01:59.141 CC module/accel/dsa/accel_dsa_rpc.o 00:01:59.141 LIB libspdk_env_dpdk_rpc.a 00:01:59.141 CC module/accel/ioat/accel_ioat.o 00:01:59.141 CC module/accel/ioat/accel_ioat_rpc.o 00:01:59.141 CC module/blob/bdev/blob_bdev.o 00:01:59.141 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:59.141 CC module/scheduler/gscheduler/gscheduler.o 00:01:59.141 CC module/accel/iaa/accel_iaa.o 00:01:59.141 CC module/accel/iaa/accel_iaa_rpc.o 00:01:59.141 CC module/accel/error/accel_error.o 00:01:59.141 CC module/accel/error/accel_error_rpc.o 00:01:59.141 CC module/keyring/linux/keyring.o 00:01:59.141 CC module/sock/posix/posix.o 00:01:59.141 CC module/keyring/linux/keyring_rpc.o 00:01:59.141 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:59.141 SO libspdk_env_dpdk_rpc.so.6.0 00:01:59.141 CC module/keyring/file/keyring.o 00:01:59.141 CC module/keyring/file/keyring_rpc.o 00:01:59.141 SYMLINK libspdk_env_dpdk_rpc.so 00:01:59.141 LIB libspdk_keyring_linux.a 00:01:59.141 LIB libspdk_keyring_file.a 00:01:59.141 LIB libspdk_scheduler_gscheduler.a 00:01:59.141 LIB libspdk_scheduler_dpdk_governor.a 00:01:59.141 LIB libspdk_accel_ioat.a 00:01:59.141 LIB libspdk_accel_error.a 00:01:59.141 SO libspdk_keyring_file.so.1.0 00:01:59.141 LIB libspdk_accel_iaa.a 00:01:59.141 SO libspdk_scheduler_gscheduler.so.4.0 00:01:59.141 SO libspdk_keyring_linux.so.1.0 00:01:59.141 LIB libspdk_scheduler_dynamic.a 00:01:59.141 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:59.400 LIB libspdk_accel_dsa.a 00:01:59.400 SO libspdk_accel_ioat.so.6.0 00:01:59.400 LIB 
libspdk_blob_bdev.a 00:01:59.400 SO libspdk_accel_error.so.2.0 00:01:59.400 SO libspdk_accel_iaa.so.3.0 00:01:59.400 SO libspdk_scheduler_dynamic.so.4.0 00:01:59.400 SO libspdk_accel_dsa.so.5.0 00:01:59.400 SYMLINK libspdk_scheduler_gscheduler.so 00:01:59.400 SYMLINK libspdk_keyring_file.so 00:01:59.400 SYMLINK libspdk_keyring_linux.so 00:01:59.400 SO libspdk_blob_bdev.so.11.0 00:01:59.400 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:59.400 SYMLINK libspdk_accel_ioat.so 00:01:59.400 SYMLINK libspdk_accel_error.so 00:01:59.400 SYMLINK libspdk_accel_iaa.so 00:01:59.400 SYMLINK libspdk_scheduler_dynamic.so 00:01:59.400 SYMLINK libspdk_accel_dsa.so 00:01:59.400 SYMLINK libspdk_blob_bdev.so 00:01:59.658 LIB libspdk_sock_posix.a 00:01:59.917 SO libspdk_sock_posix.so.6.0 00:01:59.917 CC module/bdev/malloc/bdev_malloc.o 00:01:59.917 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:59.917 CC module/bdev/null/bdev_null.o 00:01:59.917 CC module/bdev/null/bdev_null_rpc.o 00:01:59.917 CC module/bdev/delay/vbdev_delay.o 00:01:59.917 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:59.917 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:59.917 SYMLINK libspdk_sock_posix.so 00:01:59.917 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:59.917 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:59.917 CC module/bdev/gpt/gpt.o 00:01:59.917 CC module/bdev/gpt/vbdev_gpt.o 00:01:59.917 CC module/bdev/iscsi/bdev_iscsi.o 00:01:59.917 CC module/bdev/error/vbdev_error.o 00:01:59.917 CC module/bdev/error/vbdev_error_rpc.o 00:01:59.917 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:59.917 CC module/bdev/nvme/bdev_nvme.o 00:01:59.917 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:59.917 CC module/bdev/nvme/nvme_rpc.o 00:01:59.917 CC module/bdev/lvol/vbdev_lvol.o 00:01:59.917 CC module/bdev/nvme/bdev_mdns_client.o 00:01:59.917 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:59.917 CC module/bdev/ftl/bdev_ftl.o 00:01:59.917 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:59.917 CC module/bdev/nvme/vbdev_opal.o 00:01:59.917 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:59.917 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:59.917 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:59.917 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:59.917 CC module/bdev/split/vbdev_split.o 00:01:59.917 CC module/bdev/split/vbdev_split_rpc.o 00:01:59.917 CC module/bdev/passthru/vbdev_passthru.o 00:01:59.917 CC module/bdev/aio/bdev_aio.o 00:01:59.917 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:59.917 CC module/bdev/aio/bdev_aio_rpc.o 00:01:59.917 CC module/bdev/raid/bdev_raid.o 00:01:59.917 CC module/bdev/raid/bdev_raid_rpc.o 00:01:59.917 CC module/bdev/raid/bdev_raid_sb.o 00:01:59.917 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:59.917 CC module/blobfs/bdev/blobfs_bdev.o 00:01:59.917 CC module/bdev/raid/raid0.o 00:01:59.917 CC module/bdev/raid/raid1.o 00:01:59.917 CC module/bdev/raid/concat.o 00:02:00.174 LIB libspdk_blobfs_bdev.a 00:02:00.174 SO libspdk_blobfs_bdev.so.6.0 00:02:00.174 LIB libspdk_bdev_null.a 00:02:00.174 LIB libspdk_bdev_error.a 00:02:00.174 LIB libspdk_bdev_split.a 00:02:00.174 LIB libspdk_bdev_gpt.a 00:02:00.174 SO libspdk_bdev_null.so.6.0 00:02:00.174 SYMLINK libspdk_blobfs_bdev.so 00:02:00.174 SO libspdk_bdev_error.so.6.0 00:02:00.174 LIB libspdk_bdev_ftl.a 00:02:00.174 SO libspdk_bdev_split.so.6.0 00:02:00.174 SO libspdk_bdev_gpt.so.6.0 00:02:00.433 LIB libspdk_bdev_passthru.a 00:02:00.433 LIB libspdk_bdev_malloc.a 00:02:00.433 SO libspdk_bdev_ftl.so.6.0 00:02:00.433 LIB libspdk_bdev_aio.a 00:02:00.433 LIB libspdk_bdev_iscsi.a 
00:02:00.433 SYMLINK libspdk_bdev_null.so 00:02:00.433 LIB libspdk_bdev_delay.a 00:02:00.433 SYMLINK libspdk_bdev_error.so 00:02:00.433 LIB libspdk_bdev_zone_block.a 00:02:00.433 SO libspdk_bdev_malloc.so.6.0 00:02:00.433 SO libspdk_bdev_passthru.so.6.0 00:02:00.433 SO libspdk_bdev_iscsi.so.6.0 00:02:00.433 SO libspdk_bdev_aio.so.6.0 00:02:00.433 SYMLINK libspdk_bdev_split.so 00:02:00.433 SYMLINK libspdk_bdev_gpt.so 00:02:00.433 SYMLINK libspdk_bdev_ftl.so 00:02:00.433 SO libspdk_bdev_delay.so.6.0 00:02:00.433 SO libspdk_bdev_zone_block.so.6.0 00:02:00.433 SYMLINK libspdk_bdev_aio.so 00:02:00.433 SYMLINK libspdk_bdev_passthru.so 00:02:00.433 SYMLINK libspdk_bdev_malloc.so 00:02:00.433 SYMLINK libspdk_bdev_iscsi.so 00:02:00.433 LIB libspdk_bdev_virtio.a 00:02:00.433 SYMLINK libspdk_bdev_zone_block.so 00:02:00.433 SYMLINK libspdk_bdev_delay.so 00:02:00.433 LIB libspdk_bdev_lvol.a 00:02:00.433 SO libspdk_bdev_virtio.so.6.0 00:02:00.433 SO libspdk_bdev_lvol.so.6.0 00:02:00.692 SYMLINK libspdk_bdev_virtio.so 00:02:00.692 SYMLINK libspdk_bdev_lvol.so 00:02:00.951 LIB libspdk_bdev_raid.a 00:02:00.951 SO libspdk_bdev_raid.so.6.0 00:02:01.209 SYMLINK libspdk_bdev_raid.so 00:02:02.146 LIB libspdk_bdev_nvme.a 00:02:02.146 SO libspdk_bdev_nvme.so.7.0 00:02:02.146 SYMLINK libspdk_bdev_nvme.so 00:02:02.720 CC module/event/subsystems/iobuf/iobuf.o 00:02:02.720 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:03.000 CC module/event/subsystems/scheduler/scheduler.o 00:02:03.000 CC module/event/subsystems/sock/sock.o 00:02:03.000 CC module/event/subsystems/vmd/vmd.o 00:02:03.000 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:03.000 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:03.000 CC module/event/subsystems/keyring/keyring.o 00:02:03.000 LIB libspdk_event_iobuf.a 00:02:03.000 LIB libspdk_event_scheduler.a 00:02:03.000 LIB libspdk_event_vhost_blk.a 00:02:03.000 LIB libspdk_event_sock.a 00:02:03.000 LIB libspdk_event_keyring.a 00:02:03.000 LIB libspdk_event_vmd.a 00:02:03.000 SO libspdk_event_iobuf.so.3.0 00:02:03.000 SO libspdk_event_vhost_blk.so.3.0 00:02:03.000 SO libspdk_event_scheduler.so.4.0 00:02:03.000 SO libspdk_event_vmd.so.6.0 00:02:03.000 SO libspdk_event_sock.so.5.0 00:02:03.000 SO libspdk_event_keyring.so.1.0 00:02:03.000 SYMLINK libspdk_event_scheduler.so 00:02:03.000 SYMLINK libspdk_event_iobuf.so 00:02:03.275 SYMLINK libspdk_event_vhost_blk.so 00:02:03.275 SYMLINK libspdk_event_sock.so 00:02:03.275 SYMLINK libspdk_event_vmd.so 00:02:03.275 SYMLINK libspdk_event_keyring.so 00:02:03.533 CC module/event/subsystems/accel/accel.o 00:02:03.533 LIB libspdk_event_accel.a 00:02:03.792 SO libspdk_event_accel.so.6.0 00:02:03.792 SYMLINK libspdk_event_accel.so 00:02:04.049 CC module/event/subsystems/bdev/bdev.o 00:02:04.307 LIB libspdk_event_bdev.a 00:02:04.307 SO libspdk_event_bdev.so.6.0 00:02:04.307 SYMLINK libspdk_event_bdev.so 00:02:04.875 CC module/event/subsystems/scsi/scsi.o 00:02:04.875 CC module/event/subsystems/ublk/ublk.o 00:02:04.875 CC module/event/subsystems/nbd/nbd.o 00:02:04.875 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:04.875 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:04.875 LIB libspdk_event_scsi.a 00:02:04.875 LIB libspdk_event_ublk.a 00:02:04.875 SO libspdk_event_scsi.so.6.0 00:02:04.875 LIB libspdk_event_nbd.a 00:02:04.875 SO libspdk_event_ublk.so.3.0 00:02:04.875 SO libspdk_event_nbd.so.6.0 00:02:04.875 SYMLINK libspdk_event_scsi.so 00:02:04.875 LIB libspdk_event_nvmf.a 00:02:04.875 SYMLINK libspdk_event_ublk.so 00:02:04.875 SYMLINK libspdk_event_nbd.so 
00:02:04.875 SO libspdk_event_nvmf.so.6.0 00:02:05.134 SYMLINK libspdk_event_nvmf.so 00:02:05.392 CC module/event/subsystems/iscsi/iscsi.o 00:02:05.392 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:05.392 LIB libspdk_event_vhost_scsi.a 00:02:05.392 LIB libspdk_event_iscsi.a 00:02:05.392 SO libspdk_event_iscsi.so.6.0 00:02:05.392 SO libspdk_event_vhost_scsi.so.3.0 00:02:05.651 SYMLINK libspdk_event_iscsi.so 00:02:05.651 SYMLINK libspdk_event_vhost_scsi.so 00:02:05.651 SO libspdk.so.6.0 00:02:05.651 SYMLINK libspdk.so 00:02:06.227 CC app/spdk_nvme_identify/identify.o 00:02:06.227 CXX app/trace/trace.o 00:02:06.227 CC app/spdk_nvme_discover/discovery_aer.o 00:02:06.227 CC app/trace_record/trace_record.o 00:02:06.227 TEST_HEADER include/spdk/accel.h 00:02:06.227 CC app/spdk_top/spdk_top.o 00:02:06.227 TEST_HEADER include/spdk/accel_module.h 00:02:06.227 TEST_HEADER include/spdk/base64.h 00:02:06.227 TEST_HEADER include/spdk/assert.h 00:02:06.227 TEST_HEADER include/spdk/barrier.h 00:02:06.227 TEST_HEADER include/spdk/bdev.h 00:02:06.227 TEST_HEADER include/spdk/bdev_zone.h 00:02:06.227 TEST_HEADER include/spdk/bdev_module.h 00:02:06.227 TEST_HEADER include/spdk/bit_array.h 00:02:06.227 TEST_HEADER include/spdk/bit_pool.h 00:02:06.227 CC app/spdk_lspci/spdk_lspci.o 00:02:06.227 TEST_HEADER include/spdk/blob_bdev.h 00:02:06.227 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:06.227 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:06.227 CC app/spdk_nvme_perf/perf.o 00:02:06.227 TEST_HEADER include/spdk/blob.h 00:02:06.227 TEST_HEADER include/spdk/blobfs.h 00:02:06.227 TEST_HEADER include/spdk/config.h 00:02:06.227 TEST_HEADER include/spdk/conf.h 00:02:06.227 TEST_HEADER include/spdk/cpuset.h 00:02:06.227 TEST_HEADER include/spdk/crc64.h 00:02:06.227 TEST_HEADER include/spdk/crc16.h 00:02:06.227 TEST_HEADER include/spdk/crc32.h 00:02:06.227 TEST_HEADER include/spdk/dif.h 00:02:06.227 CC test/rpc_client/rpc_client_test.o 00:02:06.227 TEST_HEADER include/spdk/dma.h 00:02:06.227 TEST_HEADER include/spdk/env_dpdk.h 00:02:06.227 TEST_HEADER include/spdk/endian.h 00:02:06.227 TEST_HEADER include/spdk/env.h 00:02:06.227 TEST_HEADER include/spdk/event.h 00:02:06.227 TEST_HEADER include/spdk/fd_group.h 00:02:06.227 TEST_HEADER include/spdk/file.h 00:02:06.227 TEST_HEADER include/spdk/fd.h 00:02:06.228 TEST_HEADER include/spdk/ftl.h 00:02:06.228 CC app/spdk_dd/spdk_dd.o 00:02:06.228 TEST_HEADER include/spdk/hexlify.h 00:02:06.228 TEST_HEADER include/spdk/gpt_spec.h 00:02:06.228 TEST_HEADER include/spdk/histogram_data.h 00:02:06.228 CC app/nvmf_tgt/nvmf_main.o 00:02:06.228 TEST_HEADER include/spdk/idxd.h 00:02:06.228 TEST_HEADER include/spdk/idxd_spec.h 00:02:06.228 TEST_HEADER include/spdk/init.h 00:02:06.228 TEST_HEADER include/spdk/ioat.h 00:02:06.228 TEST_HEADER include/spdk/ioat_spec.h 00:02:06.228 TEST_HEADER include/spdk/json.h 00:02:06.228 CC app/iscsi_tgt/iscsi_tgt.o 00:02:06.228 TEST_HEADER include/spdk/jsonrpc.h 00:02:06.228 TEST_HEADER include/spdk/iscsi_spec.h 00:02:06.228 TEST_HEADER include/spdk/keyring.h 00:02:06.228 TEST_HEADER include/spdk/keyring_module.h 00:02:06.228 TEST_HEADER include/spdk/likely.h 00:02:06.228 TEST_HEADER include/spdk/lvol.h 00:02:06.228 TEST_HEADER include/spdk/log.h 00:02:06.228 TEST_HEADER include/spdk/memory.h 00:02:06.228 CC app/spdk_tgt/spdk_tgt.o 00:02:06.228 TEST_HEADER include/spdk/mmio.h 00:02:06.228 TEST_HEADER include/spdk/nbd.h 00:02:06.228 TEST_HEADER include/spdk/net.h 00:02:06.228 TEST_HEADER include/spdk/notify.h 00:02:06.228 TEST_HEADER 
include/spdk/nvme_intel.h 00:02:06.228 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:06.228 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:06.228 TEST_HEADER include/spdk/nvme.h 00:02:06.228 TEST_HEADER include/spdk/nvme_spec.h 00:02:06.228 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:06.228 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:06.228 TEST_HEADER include/spdk/nvme_zns.h 00:02:06.228 TEST_HEADER include/spdk/nvmf.h 00:02:06.228 TEST_HEADER include/spdk/nvmf_transport.h 00:02:06.228 TEST_HEADER include/spdk/opal_spec.h 00:02:06.228 TEST_HEADER include/spdk/nvmf_spec.h 00:02:06.228 TEST_HEADER include/spdk/opal.h 00:02:06.228 TEST_HEADER include/spdk/pci_ids.h 00:02:06.228 TEST_HEADER include/spdk/queue.h 00:02:06.228 TEST_HEADER include/spdk/pipe.h 00:02:06.228 TEST_HEADER include/spdk/reduce.h 00:02:06.228 TEST_HEADER include/spdk/scheduler.h 00:02:06.228 TEST_HEADER include/spdk/rpc.h 00:02:06.228 TEST_HEADER include/spdk/scsi.h 00:02:06.228 TEST_HEADER include/spdk/sock.h 00:02:06.228 TEST_HEADER include/spdk/scsi_spec.h 00:02:06.228 TEST_HEADER include/spdk/stdinc.h 00:02:06.228 TEST_HEADER include/spdk/string.h 00:02:06.228 TEST_HEADER include/spdk/trace.h 00:02:06.228 TEST_HEADER include/spdk/thread.h 00:02:06.228 TEST_HEADER include/spdk/tree.h 00:02:06.228 TEST_HEADER include/spdk/trace_parser.h 00:02:06.228 TEST_HEADER include/spdk/ublk.h 00:02:06.228 TEST_HEADER include/spdk/version.h 00:02:06.228 TEST_HEADER include/spdk/util.h 00:02:06.228 TEST_HEADER include/spdk/uuid.h 00:02:06.228 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:06.228 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:06.228 TEST_HEADER include/spdk/vhost.h 00:02:06.228 TEST_HEADER include/spdk/vmd.h 00:02:06.228 CXX test/cpp_headers/accel.o 00:02:06.228 TEST_HEADER include/spdk/zipf.h 00:02:06.228 TEST_HEADER include/spdk/xor.h 00:02:06.228 CXX test/cpp_headers/assert.o 00:02:06.228 CXX test/cpp_headers/accel_module.o 00:02:06.228 CXX test/cpp_headers/base64.o 00:02:06.228 CXX test/cpp_headers/barrier.o 00:02:06.228 CXX test/cpp_headers/bdev.o 00:02:06.228 CXX test/cpp_headers/bdev_module.o 00:02:06.228 CXX test/cpp_headers/bdev_zone.o 00:02:06.228 CXX test/cpp_headers/bit_array.o 00:02:06.228 CXX test/cpp_headers/blob_bdev.o 00:02:06.228 CXX test/cpp_headers/bit_pool.o 00:02:06.228 CXX test/cpp_headers/blobfs_bdev.o 00:02:06.228 CXX test/cpp_headers/blobfs.o 00:02:06.228 CXX test/cpp_headers/blob.o 00:02:06.228 CXX test/cpp_headers/config.o 00:02:06.228 CXX test/cpp_headers/conf.o 00:02:06.228 CXX test/cpp_headers/crc16.o 00:02:06.228 CXX test/cpp_headers/cpuset.o 00:02:06.228 CXX test/cpp_headers/crc64.o 00:02:06.228 CXX test/cpp_headers/dif.o 00:02:06.228 CXX test/cpp_headers/dma.o 00:02:06.228 CXX test/cpp_headers/crc32.o 00:02:06.228 CXX test/cpp_headers/endian.o 00:02:06.228 CXX test/cpp_headers/env_dpdk.o 00:02:06.228 CXX test/cpp_headers/env.o 00:02:06.228 CXX test/cpp_headers/event.o 00:02:06.228 CXX test/cpp_headers/fd.o 00:02:06.228 CXX test/cpp_headers/fd_group.o 00:02:06.228 CXX test/cpp_headers/gpt_spec.o 00:02:06.228 CXX test/cpp_headers/ftl.o 00:02:06.228 CXX test/cpp_headers/file.o 00:02:06.228 CXX test/cpp_headers/histogram_data.o 00:02:06.228 CXX test/cpp_headers/hexlify.o 00:02:06.228 CXX test/cpp_headers/idxd_spec.o 00:02:06.228 CXX test/cpp_headers/idxd.o 00:02:06.228 CXX test/cpp_headers/ioat.o 00:02:06.228 CXX test/cpp_headers/init.o 00:02:06.228 CXX test/cpp_headers/ioat_spec.o 00:02:06.228 CXX test/cpp_headers/iscsi_spec.o 00:02:06.228 CXX test/cpp_headers/json.o 00:02:06.228 CXX 
test/cpp_headers/jsonrpc.o 00:02:06.228 CXX test/cpp_headers/keyring.o 00:02:06.228 CXX test/cpp_headers/log.o 00:02:06.228 CXX test/cpp_headers/keyring_module.o 00:02:06.228 CXX test/cpp_headers/likely.o 00:02:06.228 CXX test/cpp_headers/lvol.o 00:02:06.228 CXX test/cpp_headers/memory.o 00:02:06.228 CXX test/cpp_headers/mmio.o 00:02:06.228 CXX test/cpp_headers/notify.o 00:02:06.228 CXX test/cpp_headers/nbd.o 00:02:06.228 CXX test/cpp_headers/net.o 00:02:06.228 CXX test/cpp_headers/nvme_intel.o 00:02:06.228 CXX test/cpp_headers/nvme.o 00:02:06.228 CXX test/cpp_headers/nvme_ocssd.o 00:02:06.228 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:06.228 CXX test/cpp_headers/nvme_spec.o 00:02:06.228 CXX test/cpp_headers/nvme_zns.o 00:02:06.228 CXX test/cpp_headers/nvmf_cmd.o 00:02:06.228 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:06.228 CC examples/util/zipf/zipf.o 00:02:06.228 CXX test/cpp_headers/nvmf.o 00:02:06.228 CXX test/cpp_headers/nvmf_spec.o 00:02:06.228 CXX test/cpp_headers/nvmf_transport.o 00:02:06.228 CXX test/cpp_headers/opal.o 00:02:06.228 CXX test/cpp_headers/opal_spec.o 00:02:06.228 CXX test/cpp_headers/pci_ids.o 00:02:06.228 CXX test/cpp_headers/pipe.o 00:02:06.228 CXX test/cpp_headers/queue.o 00:02:06.228 CXX test/cpp_headers/reduce.o 00:02:06.228 CXX test/cpp_headers/rpc.o 00:02:06.228 CC test/thread/poller_perf/poller_perf.o 00:02:06.228 CXX test/cpp_headers/scheduler.o 00:02:06.228 CXX test/cpp_headers/scsi.o 00:02:06.228 CC test/app/histogram_perf/histogram_perf.o 00:02:06.228 CC app/fio/nvme/fio_plugin.o 00:02:06.228 CXX test/cpp_headers/scsi_spec.o 00:02:06.228 CXX test/cpp_headers/stdinc.o 00:02:06.228 CXX test/cpp_headers/sock.o 00:02:06.228 CXX test/cpp_headers/string.o 00:02:06.228 CC examples/ioat/verify/verify.o 00:02:06.228 CXX test/cpp_headers/thread.o 00:02:06.228 CXX test/cpp_headers/trace.o 00:02:06.228 CXX test/cpp_headers/tree.o 00:02:06.228 CXX test/cpp_headers/trace_parser.o 00:02:06.228 CC test/env/vtophys/vtophys.o 00:02:06.228 CXX test/cpp_headers/ublk.o 00:02:06.228 CXX test/cpp_headers/util.o 00:02:06.228 CXX test/cpp_headers/uuid.o 00:02:06.228 CXX test/cpp_headers/version.o 00:02:06.228 CC test/app/jsoncat/jsoncat.o 00:02:06.228 CC examples/ioat/perf/perf.o 00:02:06.508 CC test/app/stub/stub.o 00:02:06.508 CC test/env/memory/memory_ut.o 00:02:06.508 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:06.508 CC test/env/pci/pci_ut.o 00:02:06.508 CXX test/cpp_headers/vfio_user_pci.o 00:02:06.508 CC app/fio/bdev/fio_plugin.o 00:02:06.508 CC test/dma/test_dma/test_dma.o 00:02:06.508 LINK spdk_lspci 00:02:06.508 CXX test/cpp_headers/vfio_user_spec.o 00:02:06.508 CC test/app/bdev_svc/bdev_svc.o 00:02:06.788 LINK nvmf_tgt 00:02:06.788 LINK interrupt_tgt 00:02:06.788 LINK spdk_nvme_discover 00:02:06.788 LINK rpc_client_test 00:02:06.788 LINK iscsi_tgt 00:02:06.788 LINK spdk_tgt 00:02:07.051 CC test/env/mem_callbacks/mem_callbacks.o 00:02:07.051 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:07.051 LINK poller_perf 00:02:07.051 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:07.051 LINK zipf 00:02:07.051 LINK vtophys 00:02:07.051 CXX test/cpp_headers/vhost.o 00:02:07.051 LINK jsoncat 00:02:07.051 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:07.051 LINK histogram_perf 00:02:07.051 CXX test/cpp_headers/vmd.o 00:02:07.051 CXX test/cpp_headers/xor.o 00:02:07.051 CXX test/cpp_headers/zipf.o 00:02:07.051 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:07.051 LINK spdk_trace_record 00:02:07.051 LINK env_dpdk_post_init 00:02:07.051 LINK verify 00:02:07.051 LINK 
stub 00:02:07.051 LINK bdev_svc 00:02:07.310 LINK ioat_perf 00:02:07.310 LINK spdk_dd 00:02:07.310 LINK spdk_trace 00:02:07.310 LINK test_dma 00:02:07.310 LINK pci_ut 00:02:07.570 LINK spdk_bdev 00:02:07.570 LINK nvme_fuzz 00:02:07.570 LINK vhost_fuzz 00:02:07.570 LINK spdk_nvme 00:02:07.570 CC test/event/reactor/reactor.o 00:02:07.570 LINK mem_callbacks 00:02:07.570 CC examples/idxd/perf/perf.o 00:02:07.570 CC examples/sock/hello_world/hello_sock.o 00:02:07.570 CC test/event/event_perf/event_perf.o 00:02:07.570 CC examples/vmd/led/led.o 00:02:07.570 CC test/event/reactor_perf/reactor_perf.o 00:02:07.570 CC examples/vmd/lsvmd/lsvmd.o 00:02:07.570 CC app/vhost/vhost.o 00:02:07.570 CC test/event/app_repeat/app_repeat.o 00:02:07.570 CC test/event/scheduler/scheduler.o 00:02:07.570 LINK spdk_nvme_identify 00:02:07.570 CC examples/thread/thread/thread_ex.o 00:02:07.829 LINK spdk_top 00:02:07.829 LINK reactor 00:02:07.829 LINK spdk_nvme_perf 00:02:07.829 LINK lsvmd 00:02:07.829 LINK event_perf 00:02:07.829 LINK reactor_perf 00:02:07.829 CC test/nvme/err_injection/err_injection.o 00:02:07.829 CC test/nvme/startup/startup.o 00:02:07.829 CC test/nvme/fused_ordering/fused_ordering.o 00:02:07.829 CC test/nvme/e2edp/nvme_dp.o 00:02:07.829 CC test/nvme/cuse/cuse.o 00:02:07.829 CC test/nvme/overhead/overhead.o 00:02:07.829 CC test/nvme/sgl/sgl.o 00:02:07.829 LINK led 00:02:07.829 LINK app_repeat 00:02:07.829 CC test/nvme/reset/reset.o 00:02:07.829 CC test/nvme/simple_copy/simple_copy.o 00:02:07.829 CC test/nvme/connect_stress/connect_stress.o 00:02:07.829 CC test/nvme/aer/aer.o 00:02:07.829 CC test/nvme/compliance/nvme_compliance.o 00:02:07.829 CC test/nvme/reserve/reserve.o 00:02:07.829 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:07.829 CC test/nvme/boot_partition/boot_partition.o 00:02:07.829 CC test/nvme/fdp/fdp.o 00:02:07.829 CC test/blobfs/mkfs/mkfs.o 00:02:07.829 LINK vhost 00:02:07.829 CC test/accel/dif/dif.o 00:02:07.829 LINK hello_sock 00:02:07.829 LINK scheduler 00:02:08.088 LINK thread 00:02:08.088 CC test/lvol/esnap/esnap.o 00:02:08.088 LINK memory_ut 00:02:08.088 LINK idxd_perf 00:02:08.088 LINK boot_partition 00:02:08.088 LINK startup 00:02:08.088 LINK err_injection 00:02:08.088 LINK connect_stress 00:02:08.088 LINK doorbell_aers 00:02:08.088 LINK fused_ordering 00:02:08.088 LINK reserve 00:02:08.088 LINK mkfs 00:02:08.088 LINK simple_copy 00:02:08.088 LINK sgl 00:02:08.088 LINK reset 00:02:08.088 LINK nvme_dp 00:02:08.088 LINK aer 00:02:08.088 LINK overhead 00:02:08.088 LINK fdp 00:02:08.347 LINK nvme_compliance 00:02:08.347 LINK dif 00:02:08.347 CC examples/nvme/arbitration/arbitration.o 00:02:08.347 CC examples/nvme/reconnect/reconnect.o 00:02:08.347 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:08.347 CC examples/nvme/hello_world/hello_world.o 00:02:08.347 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:08.347 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:08.347 CC examples/nvme/abort/abort.o 00:02:08.347 CC examples/nvme/hotplug/hotplug.o 00:02:08.606 CC examples/accel/perf/accel_perf.o 00:02:08.606 LINK cmb_copy 00:02:08.606 CC examples/blob/hello_world/hello_blob.o 00:02:08.606 CC examples/blob/cli/blobcli.o 00:02:08.606 LINK pmr_persistence 00:02:08.606 LINK hello_world 00:02:08.606 LINK hotplug 00:02:08.606 LINK arbitration 00:02:08.606 LINK reconnect 00:02:08.867 LINK abort 00:02:08.867 LINK iscsi_fuzz 00:02:08.867 LINK hello_blob 00:02:08.867 LINK nvme_manage 00:02:08.867 CC test/bdev/bdevio/bdevio.o 00:02:08.867 LINK accel_perf 00:02:09.129 LINK blobcli 
00:02:09.129 LINK cuse 00:02:09.388 LINK bdevio 00:02:09.388 CC examples/bdev/hello_world/hello_bdev.o 00:02:09.646 CC examples/bdev/bdevperf/bdevperf.o 00:02:09.646 LINK hello_bdev 00:02:10.213 LINK bdevperf 00:02:10.781 CC examples/nvmf/nvmf/nvmf.o 00:02:11.040 LINK nvmf 00:02:12.418 LINK esnap 00:02:12.987 00:02:12.987 real 0m53.461s 00:02:12.987 user 6m50.367s 00:02:12.987 sys 4m13.245s 00:02:12.987 06:51:27 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:12.987 06:51:27 make -- common/autotest_common.sh@10 -- $ set +x 00:02:12.987 ************************************ 00:02:12.987 END TEST make 00:02:12.987 ************************************ 00:02:12.987 06:51:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:12.987 06:51:27 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:12.987 06:51:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:12.987 06:51:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.987 06:51:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:12.987 06:51:27 -- pm/common@44 -- $ pid=1314329 00:02:12.987 06:51:27 -- pm/common@50 -- $ kill -TERM 1314329 00:02:12.987 06:51:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.987 06:51:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:12.987 06:51:27 -- pm/common@44 -- $ pid=1314331 00:02:12.987 06:51:27 -- pm/common@50 -- $ kill -TERM 1314331 00:02:12.987 06:51:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.987 06:51:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:12.987 06:51:27 -- pm/common@44 -- $ pid=1314333 00:02:12.987 06:51:27 -- pm/common@50 -- $ kill -TERM 1314333 00:02:12.987 06:51:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.987 06:51:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:12.987 06:51:27 -- pm/common@44 -- $ pid=1314355 00:02:12.987 06:51:27 -- pm/common@50 -- $ sudo -E kill -TERM 1314355 00:02:12.987 06:51:27 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:02:12.987 06:51:27 -- nvmf/common.sh@7 -- # uname -s 00:02:12.987 06:51:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:12.987 06:51:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:12.987 06:51:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:12.987 06:51:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:12.987 06:51:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:12.987 06:51:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:12.987 06:51:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:12.987 06:51:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:12.987 06:51:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:12.987 06:51:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:12.987 06:51:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:02:12.987 06:51:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:02:12.987 06:51:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:12.987 06:51:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:02:12.987 06:51:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:12.987 06:51:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:12.987 06:51:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:12.987 06:51:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:12.987 06:51:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:12.987 06:51:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:12.987 06:51:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.987 06:51:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.987 06:51:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.987 06:51:27 -- paths/export.sh@5 -- # export PATH 00:02:12.987 06:51:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.987 06:51:27 -- nvmf/common.sh@47 -- # : 0 00:02:12.987 06:51:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:12.987 06:51:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:12.987 06:51:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:12.987 06:51:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:12.987 06:51:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:12.987 06:51:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:12.987 06:51:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:12.987 06:51:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:12.987 06:51:27 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:12.987 06:51:27 -- spdk/autotest.sh@32 -- # uname -s 00:02:12.987 06:51:27 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:12.987 06:51:27 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:12.987 06:51:27 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:12.988 06:51:27 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:12.988 06:51:27 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:12.988 06:51:27 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:12.988 06:51:27 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:12.988 06:51:27 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:12.988 06:51:27 -- spdk/autotest.sh@48 -- # udevadm_pid=1376118 00:02:12.988 06:51:27 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:12.988 06:51:27 
-- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:12.988 06:51:27 -- pm/common@17 -- # local monitor 00:02:12.988 06:51:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.988 06:51:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.988 06:51:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.988 06:51:27 -- pm/common@21 -- # date +%s 00:02:12.988 06:51:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.988 06:51:27 -- pm/common@21 -- # date +%s 00:02:12.988 06:51:27 -- pm/common@25 -- # sleep 1 00:02:12.988 06:51:27 -- pm/common@21 -- # date +%s 00:02:12.988 06:51:27 -- pm/common@21 -- # date +%s 00:02:12.988 06:51:27 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721796687 00:02:12.988 06:51:27 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721796687 00:02:12.988 06:51:27 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721796687 00:02:12.988 06:51:27 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721796687 00:02:13.247 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721796687_collect-vmstat.pm.log 00:02:13.247 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721796687_collect-cpu-load.pm.log 00:02:13.247 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721796687_collect-cpu-temp.pm.log 00:02:13.247 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721796687_collect-bmc-pm.bmc.pm.log 00:02:14.185 06:51:28 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:14.185 06:51:28 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:14.185 06:51:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:14.185 06:51:28 -- common/autotest_common.sh@10 -- # set +x 00:02:14.185 06:51:28 -- spdk/autotest.sh@59 -- # create_test_list 00:02:14.185 06:51:28 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:14.185 06:51:28 -- common/autotest_common.sh@10 -- # set +x 00:02:14.185 06:51:28 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:02:14.185 06:51:28 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:14.185 06:51:28 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:14.185 06:51:28 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:14.185 06:51:28 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:14.185 06:51:28 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:14.185 06:51:28 -- common/autotest_common.sh@1453 -- # uname 00:02:14.185 06:51:28 -- common/autotest_common.sh@1453 -- # '[' Linux = FreeBSD ']' 00:02:14.185 06:51:28 -- spdk/autotest.sh@66 -- # 
freebsd_set_maxsock_buf 00:02:14.185 06:51:28 -- common/autotest_common.sh@1473 -- # uname 00:02:14.185 06:51:28 -- common/autotest_common.sh@1473 -- # [[ Linux = FreeBSD ]] 00:02:14.185 06:51:28 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:14.185 06:51:28 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:14.185 06:51:28 -- spdk/autotest.sh@72 -- # hash lcov 00:02:14.185 06:51:28 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:14.185 06:51:28 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:14.185 --rc lcov_branch_coverage=1 00:02:14.185 --rc lcov_function_coverage=1 00:02:14.185 --rc genhtml_branch_coverage=1 00:02:14.185 --rc genhtml_function_coverage=1 00:02:14.185 --rc genhtml_legend=1 00:02:14.185 --rc geninfo_all_blocks=1 00:02:14.185 ' 00:02:14.185 06:51:28 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:14.185 --rc lcov_branch_coverage=1 00:02:14.185 --rc lcov_function_coverage=1 00:02:14.185 --rc genhtml_branch_coverage=1 00:02:14.185 --rc genhtml_function_coverage=1 00:02:14.185 --rc genhtml_legend=1 00:02:14.185 --rc geninfo_all_blocks=1 00:02:14.185 ' 00:02:14.185 06:51:28 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:14.185 --rc lcov_branch_coverage=1 00:02:14.185 --rc lcov_function_coverage=1 00:02:14.185 --rc genhtml_branch_coverage=1 00:02:14.185 --rc genhtml_function_coverage=1 00:02:14.185 --rc genhtml_legend=1 00:02:14.185 --rc geninfo_all_blocks=1 00:02:14.185 --no-external' 00:02:14.185 06:51:28 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:14.185 --rc lcov_branch_coverage=1 00:02:14.185 --rc lcov_function_coverage=1 00:02:14.185 --rc genhtml_branch_coverage=1 00:02:14.185 --rc genhtml_function_coverage=1 00:02:14.185 --rc genhtml_legend=1 00:02:14.185 --rc geninfo_all_blocks=1 00:02:14.185 --no-external' 00:02:14.185 06:51:28 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:14.185 lcov: LCOV version 1.14 00:02:14.185 06:51:28 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:02:15.563 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:15.563 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:15.563 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:15.563 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:15.563 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:15.563 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:15.563 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:15.563 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:15.563 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno:no 
functions found 00:02:15.563 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:15.563 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:15.563 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:15.563 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:15.563 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:15.563 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:15.563 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:15.563 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:15.563 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:15.563 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:15.563 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:15.563 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:15.563 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:15.563 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:15.563 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:15.563 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:15.563 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:15.563 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:15.563 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:15.563 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:15.563 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:15.563 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:15.563 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:15.563 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:15.563 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:15.563 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:15.563 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:15.563 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:15.563 geninfo: WARNING: GCOV did not produce any 
data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:15.563 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:15.563 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:15.564 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:15.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:15.564 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:15.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:15.564 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:15.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:15.564 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:15.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:15.823 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:15.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:15.823 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:15.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:15.823 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:15.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:15.823 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:15.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:15.823 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:15.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:15.823 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:15.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:15.823 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:15.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:15.823 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:15.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:15.823 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:15.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:15.823 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:15.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:15.823 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:15.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:15.823 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:15.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:15.823 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:15.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:15.823 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:15.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:15.823 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:15.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:15.823 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:15.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:15.823 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:15.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:15.823 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:15.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:15.823 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:15.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:15.823 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:15.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:15.823 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:15.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:15.823 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:15.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:15.823 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:02:15.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/net.gcno 00:02:15.823 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:15.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:15.823 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:15.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:15.823 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:15.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:15.823 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:15.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:15.823 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:15.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:16.083 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:16.083 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:16.083 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:16.083 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:16.083 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:16.083 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:16.083 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:16.083 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:16.083 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:16.083 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:16.083 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:16.083 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:16.083 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:16.083 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:16.083 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:16.083 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:16.083 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:16.083 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:16.083 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:16.083 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:16.083 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:16.083 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:16.083 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:16.083 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:16.083 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:16.083 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:16.083 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:16.083 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:16.083 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:16.083 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:16.083 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:16.083 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:16.083 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:16.083 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:16.083 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:16.083 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:16.083 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:16.083 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:16.083 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:16.083 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:16.083 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:16.083 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:16.083 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:16.083 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:16.083 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:16.083 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:16.083 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:16.083 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:16.083 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:16.083 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:16.083 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:16.083 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:16.083 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:16.083 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:16.083 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:16.083 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:16.343 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:16.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:16.343 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:16.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:16.343 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:16.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:16.343 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:16.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:16.343 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:16.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:16.343 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:16.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:16.343 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:16.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:16.343 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:16.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:16.343 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:16.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:28.607 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 
00:02:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:38.586 06:51:53 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:38.586 06:51:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:38.586 06:51:53 -- common/autotest_common.sh@10 -- # set +x 00:02:38.586 06:51:53 -- spdk/autotest.sh@91 -- # rm -f 00:02:38.586 06:51:53 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:42.781 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:42.781 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:42.781 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:42.781 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:42.781 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:43.040 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:43.040 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:43.040 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:43.040 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:43.040 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:43.040 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:43.040 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:43.040 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:43.040 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:43.299 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:43.299 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:43.299 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:43.299 06:51:57 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:43.299 06:51:57 -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:02:43.299 06:51:57 -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:02:43.299 06:51:57 -- common/autotest_common.sh@1668 -- # local nvme bdf 00:02:43.299 06:51:57 -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:02:43.299 06:51:57 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:02:43.299 06:51:57 -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:02:43.299 06:51:57 -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:43.299 06:51:57 -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:02:43.299 06:51:57 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:43.299 06:51:57 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:43.299 06:51:57 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:43.299 06:51:57 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:43.299 06:51:57 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:43.299 06:51:57 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:43.299 No valid GPT data, bailing 00:02:43.299 06:51:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:43.299 06:51:57 -- scripts/common.sh@391 -- # pt= 00:02:43.299 06:51:57 -- scripts/common.sh@392 -- # return 1 00:02:43.299 06:51:57 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:43.299 1+0 records in 00:02:43.299 1+0 records out 00:02:43.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00183972 s, 570 MB/s 00:02:43.299 06:51:57 -- spdk/autotest.sh@118 -- # sync 00:02:43.299 06:51:57 -- spdk/autotest.sh@120 
-- # xtrace_disable_per_cmd reap_spdk_processes 00:02:43.299 06:51:57 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:43.299 06:51:57 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:49.866 06:52:04 -- spdk/autotest.sh@124 -- # uname -s 00:02:49.866 06:52:04 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:49.866 06:52:04 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:02:49.866 06:52:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:49.866 06:52:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:49.866 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:02:49.866 ************************************ 00:02:49.866 START TEST setup.sh 00:02:49.866 ************************************ 00:02:49.866 06:52:04 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:02:49.866 * Looking for test storage... 00:02:49.866 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:02:49.866 06:52:04 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:49.866 06:52:04 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:49.866 06:52:04 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:02:49.866 06:52:04 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:49.866 06:52:04 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:49.866 06:52:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:49.866 ************************************ 00:02:49.866 START TEST acl 00:02:49.866 ************************************ 00:02:49.866 06:52:04 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:02:50.125 * Looking for test storage... 
00:02:50.125 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:02:50.125 06:52:04 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:50.125 06:52:04 setup.sh.acl -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:02:50.125 06:52:04 setup.sh.acl -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:02:50.125 06:52:04 setup.sh.acl -- common/autotest_common.sh@1668 -- # local nvme bdf 00:02:50.125 06:52:04 setup.sh.acl -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:02:50.125 06:52:04 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:02:50.125 06:52:04 setup.sh.acl -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:02:50.125 06:52:04 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:50.125 06:52:04 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:02:50.125 06:52:04 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:50.125 06:52:04 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:50.125 06:52:04 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:50.125 06:52:04 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:50.125 06:52:04 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:50.125 06:52:04 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:50.125 06:52:04 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:54.381 06:52:08 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:54.381 06:52:08 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:54.381 06:52:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.381 06:52:08 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:54.381 06:52:08 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:54.381 06:52:08 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:02:58.578 Hugepages 00:02:58.578 node hugesize free / total 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.578 00:02:58.578 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 
0000:00:04.1 == *:*:*.* ]] 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.578 06:52:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.579 06:52:12 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:58.579 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.579 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.579 06:52:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.579 06:52:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:58.579 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.579 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.579 06:52:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.579 06:52:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:58.579 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.579 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.579 06:52:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.579 06:52:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:58.579 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.579 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.579 06:52:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.579 06:52:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:02:58.579 06:52:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:58.579 06:52:12 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:02:58.579 06:52:12 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:58.579 06:52:12 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:58.579 06:52:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.579 06:52:12 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:58.579 06:52:12 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:58.579 06:52:12 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:58.579 06:52:12 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:58.579 06:52:12 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:58.579 ************************************ 00:02:58.579 START TEST denied 00:02:58.579 ************************************ 00:02:58.579 06:52:13 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:58.579 06:52:13 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:02:58.579 06:52:13 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:58.579 06:52:13 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:02:58.579 06:52:13 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:58.579 06:52:13 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:03.858 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:03.858 06:52:17 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:03.858 06:52:17 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:03.858 06:52:17 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:03.858 06:52:17 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:03.858 06:52:17 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:03.858 06:52:17 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:03.858 06:52:17 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:03.858 06:52:17 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:03.858 06:52:17 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:03.858 06:52:17 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:08.053 00:03:08.053 real 0m9.495s 00:03:08.053 user 0m2.907s 00:03:08.053 sys 0m5.874s 00:03:08.053 06:52:22 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:08.053 06:52:22 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:08.053 ************************************ 00:03:08.053 END TEST denied 00:03:08.053 ************************************ 00:03:08.053 06:52:22 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:08.053 06:52:22 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:08.053 06:52:22 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:08.053 06:52:22 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:08.053 ************************************ 00:03:08.053 START TEST allowed 00:03:08.053 ************************************ 00:03:08.053 06:52:22 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:08.053 06:52:22 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:08.053 06:52:22 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:08.053 06:52:22 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:08.053 06:52:22 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:08.053 06:52:22 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:14.626 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:14.626 06:52:28 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:14.626 06:52:28 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:14.626 06:52:28 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:14.626 06:52:28 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:14.626 06:52:28 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:18.828 00:03:18.828 real 0m10.455s 00:03:18.829 user 0m2.914s 00:03:18.829 sys 0m5.750s 00:03:18.829 06:52:33 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:18.829 06:52:33 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:18.829 ************************************ 00:03:18.829 END TEST allowed 00:03:18.829 ************************************ 00:03:18.829 00:03:18.829 real 0m28.673s 00:03:18.829 user 0m8.905s 00:03:18.829 sys 0m17.530s 00:03:18.829 06:52:33 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:18.829 06:52:33 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:18.829 ************************************ 00:03:18.829 END TEST acl 00:03:18.829 ************************************ 00:03:18.829 06:52:33 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:18.829 06:52:33 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:18.829 06:52:33 setup.sh -- common/autotest_common.sh@1105 
-- # xtrace_disable 00:03:18.829 06:52:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:18.829 ************************************ 00:03:18.829 START TEST hugepages 00:03:18.829 ************************************ 00:03:18.829 06:52:33 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:18.829 * Looking for test storage... 00:03:18.829 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:18.829 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:18.829 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:18.829 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:18.829 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:18.829 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:18.829 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:18.830 06:52:33 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:18.830 06:52:33 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:18.830 06:52:33 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:18.830 06:52:33 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:18.830 06:52:33 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.830 06:52:33 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.830 06:52:33 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.830 06:52:33 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.830 06:52:33 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.830 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.830 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.830 06:52:33 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36655076 kB' 'MemAvailable: 40745288 kB' 'Buffers: 4096 kB' 'Cached: 14965604 kB' 'SwapCached: 0 kB' 'Active: 11779128 kB' 'Inactive: 3698840 kB' 'Active(anon): 11300900 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3698840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511720 kB' 'Mapped: 204156 kB' 'Shmem: 10792632 kB' 'KReclaimable: 560636 kB' 'Slab: 1269372 kB' 'SReclaimable: 560636 kB' 'SUnreclaim: 708736 kB' 'KernelStack: 22704 kB' 'PageTables: 9784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439048 kB' 'Committed_AS: 12778796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220096 kB' 'VmallocChunk: 0 kB' 'Percpu: 113792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4248948 kB' 'DirectMap2M: 43671552 kB' 'DirectMap1G: 20971520 kB' 00:03:18.830 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.830 06:52:33 setup.sh.hugepages 
-- setup/common.sh@32 -- # continue 00:03:18.831 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.831 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.831 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.831 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.831 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.831 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.831 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.831 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.831 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.831 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.831 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.831 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.831 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.831 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.831 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.831 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.831 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.831 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.831 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.831 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.832 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.832 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.832 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.832 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.832 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.832 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.832 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.832 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.832 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.833 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.834 06:52:33 
setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.834 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.835 06:52:33 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.836 06:52:33 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.837 06:52:33 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export 
CLEAR_HUGE=yes 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:18.837 06:52:33 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:18.837 06:52:33 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:18.837 06:52:33 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:18.837 06:52:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:18.837 ************************************ 00:03:18.837 START TEST default_setup 00:03:18.837 ************************************ 00:03:18.837 06:52:33 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:18.837 06:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:18.837 06:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:18.838 06:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:18.838 06:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:18.838 06:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:18.838 06:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:18.838 06:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:18.838 06:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:18.838 06:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:18.838 06:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:18.838 06:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:18.838 06:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:18.838 06:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:18.838 06:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:18.838 06:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:18.838 06:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:18.838 06:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:18.838 06:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:18.838 06:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:18.838 06:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:18.838 06:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.838 06:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:23.064 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:23.064 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:23.064 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:23.064 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:23.064 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:23.064 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:23.064 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:23.064 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 
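The xtrace above shows setup/common.sh resolving the default hugepage size by scanning /proc/meminfo with IFS=': ', reading one "field: value" pair per line until the Hugepagesize field matches, then echoing its value (2048 kB on this runner); setup/hugepages.sh uses that to pick nr_hugepages and zeroes every per-node hugepage count before the default_setup test starts. A minimal standalone sketch of that lookup pattern, using a hypothetical helper name (get_meminfo_field) rather than the exact SPDK function, and handling only the global /proc/meminfo case:

#!/usr/bin/env bash
# get_meminfo_field is a hypothetical stand-in for the get_meminfo helper
# traced above: split each /proc/meminfo line on ': ' and print the value
# of the requested field (e.g. "Hugepagesize" -> "2048").
get_meminfo_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

default_hugepages_kb=$(get_meminfo_field Hugepagesize)   # 2048 kB on this runner
echo "default hugepage size: ${default_hugepages_kb} kB"

The traced script applies the same scan to per-node meminfo files under /sys/devices/system/node when a node is requested; that case is omitted here for brevity.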
00:03:23.064 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:23.064 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:23.064 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:23.064 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:23.064 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:23.064 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:23.064 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:23.064 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:24.973 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38798528 kB' 'MemAvailable: 42888612 kB' 'Buffers: 4096 kB' 'Cached: 14965760 kB' 'SwapCached: 0 kB' 'Active: 11800684 kB' 'Inactive: 3698840 kB' 'Active(anon): 11322456 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3698840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533168 kB' 'Mapped: 204724 kB' 'Shmem: 10792788 kB' 'KReclaimable: 560508 kB' 'Slab: 1267108 kB' 'SReclaimable: 560508 kB' 'SUnreclaim: 706600 kB' 'KernelStack: 22608 kB' 'PageTables: 8932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12799228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220112 kB' 'VmallocChunk: 0 kB' 'Percpu: 113792 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4248948 kB' 'DirectMap2M: 43671552 kB' 'DirectMap1G: 20971520 kB' 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
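The verify_nr_hugepages pass traced around this point repeats that field scan for AnonHugePages, and further below for HugePages_Surp and HugePages_Rsvd, to confirm the runner holds exactly 1024 free 2048 kB pages with no anonymous, surplus or reserved huge pages (the expected values are visible in the meminfo dumps above). A condensed, illustrative version of those checks, reusing the hypothetical get_meminfo_field helper from the earlier sketch:

# Condensed sketch of the verify_nr_hugepages checks traced here; the field
# names and expected values come from the meminfo dump above, the helper is
# the hypothetical get_meminfo_field defined earlier.
anon=$(get_meminfo_field AnonHugePages)     # trace shows 0 (kB of anon THP)
surp=$(get_meminfo_field HugePages_Surp)    # trace shows 0
resv=$(get_meminfo_field HugePages_Rsvd)    # trace shows 0
free=$(get_meminfo_field HugePages_Free)    # trace shows 1024
total=$(get_meminfo_field HugePages_Total)  # trace shows 1024
echo "hugepages: total=$total free=$free rsvd=$resv surp=$surp anon_thp_kb=$anon"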
00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.974 06:52:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:24.974 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f 
mem 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38798232 kB' 'MemAvailable: 42888316 kB' 'Buffers: 4096 kB' 'Cached: 14965760 kB' 'SwapCached: 0 kB' 'Active: 11802212 kB' 'Inactive: 3698840 kB' 'Active(anon): 11323984 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3698840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534640 kB' 'Mapped: 205144 kB' 'Shmem: 10792788 kB' 'KReclaimable: 560508 kB' 'Slab: 1267108 kB' 'SReclaimable: 560508 kB' 'SUnreclaim: 706600 kB' 'KernelStack: 22624 kB' 'PageTables: 9112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12800444 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220164 kB' 'VmallocChunk: 0 kB' 'Percpu: 113792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4248948 kB' 'DirectMap2M: 43671552 kB' 'DirectMap1G: 20971520 kB' 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 06:52:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 06:52:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 06:52:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:24.976 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38800720 kB' 'MemAvailable: 42890804 kB' 'Buffers: 4096 kB' 'Cached: 14965780 kB' 'SwapCached: 0 kB' 'Active: 11796820 kB' 'Inactive: 3698840 kB' 'Active(anon): 11318592 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3698840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529124 kB' 'Mapped: 205196 kB' 'Shmem: 10792808 kB' 'KReclaimable: 560508 kB' 'Slab: 1267108 kB' 'SReclaimable: 560508 kB' 'SUnreclaim: 706600 kB' 'KernelStack: 22656 kB' 'PageTables: 8952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12795704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220064 kB' 'VmallocChunk: 0 kB' 'Percpu: 113792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4248948 kB' 'DirectMap2M: 43671552 kB' 'DirectMap1G: 20971520 kB' 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 06:52:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 06:52:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.978 06:52:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:24.979 nr_hugepages=1024 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:24.979 resv_hugepages=0 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:24.979 surplus_hugepages=0 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:24.979 anon_hugepages=0 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38801136 kB' 'MemAvailable: 42891220 kB' 'Buffers: 4096 kB' 'Cached: 14965796 kB' 'SwapCached: 0 kB' 'Active: 11801648 kB' 'Inactive: 3698840 kB' 'Active(anon): 11323420 kB' 'Inactive(anon): 0 kB' 
'Active(file): 478228 kB' 'Inactive(file): 3698840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534420 kB' 'Mapped: 204784 kB' 'Shmem: 10792824 kB' 'KReclaimable: 560508 kB' 'Slab: 1267108 kB' 'SReclaimable: 560508 kB' 'SUnreclaim: 706600 kB' 'KernelStack: 22624 kB' 'PageTables: 9064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12800488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220116 kB' 'VmallocChunk: 0 kB' 'Percpu: 113792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4248948 kB' 'DirectMap2M: 43671552 kB' 'DirectMap1G: 20971520 kB' 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.979 06:52:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.979 06:52:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.979 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
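The records around this point are setup/common.sh's get_meminfo helper scanning /proc/meminfo (or a per-node meminfo file) one "key: value" pair at a time until the requested field matches. A minimal stand-alone sketch of that pattern follows; the function and variable names are illustrative, not the literal common.sh source:

get_meminfo_value() {
  local get=$1 node=${2:-}
  local mem_f=/proc/meminfo
  # per-node lookups read that node's own meminfo file instead
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  local line var val _
  while IFS= read -r line; do
    line=${line#Node [0-9]* }          # per-node files prefix every line with "Node N "
    IFS=': ' read -r var val _ <<< "$line"
    if [[ $var == "$get" ]]; then
      echo "${val:-0}"                 # value in kB, or a bare page count
      return 0
    fi
  done < "$mem_f"
  echo 0                               # field absent: report 0, as the trace does
}

# example calls: get_meminfo_value HugePages_Rsvd ; get_meminfo_value HugePages_Surp 0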
00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.980 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
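With the raw counters in hand, the hugepages.sh lines that follow simply check the accounting: HugePages_Total must equal the requested nr_hugepages plus any surplus and reserved pages, and each NUMA node's own count must match what setup assigned to it, which is where the echoed "node0=1024 expecting 1024" comes from (in this run all 1024 pages sit on node 0). Roughly, reusing the helper sketched above; the concrete values 1024/0/0 are simply what this run reported, and the structure is an illustration rather than the literal hugepages.sh:

nr_hugepages=1024                            # requested default-size pages
surp=$(get_meminfo_value HugePages_Surp)     # 0 in this run
resv=$(get_meminfo_value HugePages_Rsvd)     # 0 in this run
total=$(get_meminfo_value HugePages_Total)   # 1024 in this run

(( total == nr_hugepages + surp + resv )) || { echo "hugepage accounting mismatch: $total" >&2; exit 1; }

# per-node view: each node reports its own share of the pool
for node_dir in /sys/devices/system/node/node[0-9]*; do
  node=${node_dir##*node}
  echo "node$node=$(get_meminfo_value HugePages_Total "$node")"
done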
00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 22351332 kB' 'MemUsed: 10240752 kB' 'SwapCached: 0 kB' 'Active: 6369316 kB' 'Inactive: 410960 kB' 'Active(anon): 6092004 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 410960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6600548 kB' 'Mapped: 72500 kB' 'AnonPages: 182848 kB' 'Shmem: 5912276 kB' 'KernelStack: 12024 kB' 'PageTables: 4812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 377008 kB' 'Slab: 711232 kB' 'SReclaimable: 377008 kB' 'SUnreclaim: 334224 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.981 06:52:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.981 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.982 06:52:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.982 06:52:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:24.982 node0=1024 expecting 1024 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:24.982 00:03:24.982 real 0m6.076s 00:03:24.982 user 0m1.585s 00:03:24.982 sys 0m2.641s 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:24.982 06:52:39 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:24.982 ************************************ 00:03:24.982 END TEST default_setup 00:03:24.982 ************************************ 00:03:24.982 06:52:39 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:24.982 06:52:39 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:24.982 06:52:39 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.982 06:52:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:24.982 ************************************ 00:03:24.982 START TEST per_node_1G_alloc 00:03:24.982 ************************************ 00:03:24.982 06:52:39 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:24.982 06:52:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:24.982 06:52:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:24.982 06:52:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:24.983 06:52:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:24.983 06:52:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:24.983 06:52:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:24.983 06:52:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:24.983 06:52:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( 
size >= default_hugepages )) 00:03:24.983 06:52:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:24.983 06:52:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:24.983 06:52:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:24.983 06:52:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.983 06:52:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:24.983 06:52:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:24.983 06:52:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.983 06:52:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.983 06:52:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:24.983 06:52:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:24.983 06:52:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:24.983 06:52:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:24.983 06:52:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:24.983 06:52:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:24.983 06:52:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:24.983 06:52:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:24.983 06:52:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:24.983 06:52:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.983 06:52:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:29.179 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:29.179 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:29.179 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:29.179 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:29.179 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:29.179 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:29.179 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:29.179 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:29.179 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:29.179 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:29.179 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:29.179 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:29.179 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:29.179 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:29.179 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:29.179 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:29.179 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:29.179 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:29.179 06:52:43 
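Here the test requests 1 GiB of hugepages on each of NUMA nodes 0 and 1 (get_test_nr_hugepages 1048576 0 1): with the default 2048 kB hugepage size that is 512 pages per node, so the setup script is driven with NRHUGE=512 HUGENODE=0,1 and the verification that follows expects 1024 pages system-wide (HugePages_Total: 1024, Hugetlb: 2097152 kB). A quick standalone check of that arithmetic (a sketch, not part of the test scripts):

# hugepage sizing as derived in the trace above
size_kb=1048576              # requested per node: 1 GiB
hugepagesize_kb=2048         # default hugepage size reported in /proc/meminfo
nodes=(0 1)                  # HUGENODE=0,1

per_node=$(( size_kb / hugepagesize_kb ))    # 512  -> NRHUGE
total=$(( per_node * ${#nodes[@]} ))         # 1024 -> expected HugePages_Total
echo "NRHUGE=$per_node per node, $total pages = $(( total * hugepagesize_kb )) kB Hugetlb"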
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:29.179 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:29.179 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:29.179 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:29.179 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:29.179 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:29.179 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:29.179 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:29.179 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:29.179 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:29.179 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:29.179 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:29.179 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.179 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.179 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.179 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.179 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.179 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.179 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.179 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.179 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38806028 kB' 'MemAvailable: 42896112 kB' 'Buffers: 4096 kB' 'Cached: 14965904 kB' 'SwapCached: 0 kB' 'Active: 11794188 kB' 'Inactive: 3698840 kB' 'Active(anon): 11315960 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3698840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525868 kB' 'Mapped: 203216 kB' 'Shmem: 10792932 kB' 'KReclaimable: 560508 kB' 'Slab: 1268056 kB' 'SReclaimable: 560508 kB' 'SUnreclaim: 707548 kB' 'KernelStack: 22528 kB' 'PageTables: 8640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12785784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220160 kB' 'VmallocChunk: 0 kB' 'Percpu: 113792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4248948 kB' 'DirectMap2M: 43671552 kB' 'DirectMap1G: 20971520 kB' 00:03:29.179 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31/@32 -- # (each /proc/meminfo field from MemTotal through HardwareCorrupted is read and skipped with 'continue' while scanning for AnonHugePages) 00:03:29.181 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.181 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.181 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:29.181 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:29.181 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:29.181 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.181 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:29.181 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:29.181 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.181 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.181 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.181 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.181 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.181 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.181 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.181 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.181 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38805776 kB' 'MemAvailable: 42895860 kB' 'Buffers: 4096 kB' 'Cached: 14965904 kB' 'SwapCached: 0 kB' 'Active: 11794660 kB' 'Inactive: 3698840 kB' 'Active(anon): 11316432 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3698840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526392 kB' 'Mapped: 203208 kB' 'Shmem: 10792932 kB' 'KReclaimable: 560508 kB' 'Slab: 1268056 kB' 'SReclaimable: 560508 kB' 'SUnreclaim: 707548 kB' 'KernelStack: 22544 kB' 'PageTables: 8704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12785800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220128 kB' 'VmallocChunk: 0 kB' 'Percpu: 113792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4248948 kB' 'DirectMap2M: 43671552 kB' 'DirectMap1G: 20971520 kB' 00:03:29.181 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.181 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.181 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.181 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.181 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.181 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.181 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.181 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.181 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.181 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.181 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.181 06:52:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31/@32 -- # (each /proc/meminfo field from Buffers through HugePages_Rsvd is read and skipped with 'continue' while scanning for HugePages_Surp) 00:03:29.183 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.183 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.183 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:29.183 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:29.183 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:29.183 06:52:43 setup.sh.hugepages.per_node_1G_alloc
-- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:29.183 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:29.183 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:29.183 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.183 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.183 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.183 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.183 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.183 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.183 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.183 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.183 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38805524 kB' 'MemAvailable: 42895608 kB' 'Buffers: 4096 kB' 'Cached: 14965904 kB' 'SwapCached: 0 kB' 'Active: 11793700 kB' 'Inactive: 3698840 kB' 'Active(anon): 11315472 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3698840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525900 kB' 'Mapped: 203132 kB' 'Shmem: 10792932 kB' 'KReclaimable: 560508 kB' 'Slab: 1268040 kB' 'SReclaimable: 560508 kB' 'SUnreclaim: 707532 kB' 'KernelStack: 22560 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12785824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220128 kB' 'VmallocChunk: 0 kB' 'Percpu: 113792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4248948 kB' 'DirectMap2M: 43671552 kB' 'DirectMap1G: 20971520 kB' 00:03:29.183 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.183 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.183 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.183 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.183 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.183 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.183 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.183 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.183 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.183 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:03:29.183 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31/@32 -- # (the /proc/meminfo fields Buffers through Dirty are read and skipped with 'continue' while scanning for HugePages_Rsvd) 00:03:29.184 06:52:43
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.184 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.184 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.184 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.184 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.184 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.184 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.184 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.184 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.184 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.184 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.184 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.184 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.184 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.184 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.184 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.184 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.184 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.184 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.184 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.184 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.184 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.184 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.184 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.184 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.184 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.184 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.184 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.184 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.184 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.185 06:52:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.185 06:52:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.185 06:52:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:29.185 nr_hugepages=1024 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:29.185 resv_hugepages=0 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:29.185 surplus_hugepages=0 00:03:29.185 06:52:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:29.185 anon_hugepages=0 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.185 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38806208 kB' 'MemAvailable: 42896292 kB' 'Buffers: 4096 kB' 'Cached: 14965948 kB' 'SwapCached: 0 kB' 'Active: 11793856 kB' 'Inactive: 3698840 kB' 'Active(anon): 11315628 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3698840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526028 kB' 'Mapped: 203132 kB' 'Shmem: 10792976 kB' 'KReclaimable: 560508 kB' 'Slab: 1268040 kB' 'SReclaimable: 560508 kB' 'SUnreclaim: 707532 kB' 'KernelStack: 22528 kB' 'PageTables: 8644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12785848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220128 kB' 'VmallocChunk: 0 kB' 'Percpu: 113792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4248948 kB' 'DirectMap2M: 43671552 kB' 'DirectMap1G: 20971520 kB' 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.186 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.187 06:52:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.187 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 23407672 kB' 'MemUsed: 9184412 kB' 'SwapCached: 0 kB' 'Active: 6368428 kB' 'Inactive: 410960 kB' 'Active(anon): 6091116 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 410960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6600676 kB' 'Mapped: 71820 kB' 'AnonPages: 181876 kB' 'Shmem: 5912404 kB' 'KernelStack: 12040 kB' 'PageTables: 4952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 377008 kB' 'Slab: 712332 kB' 'SReclaimable: 377008 kB' 'SUnreclaim: 335324 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.188 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 15398756 kB' 'MemUsed: 12304352 kB' 'SwapCached: 0 kB' 'Active: 5425272 kB' 'Inactive: 3287880 kB' 'Active(anon): 5224356 kB' 'Inactive(anon): 0 kB' 'Active(file): 200916 kB' 'Inactive(file): 3287880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8369388 kB' 'Mapped: 131312 kB' 'AnonPages: 343972 kB' 'Shmem: 4880592 kB' 'KernelStack: 10472 kB' 'PageTables: 3628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 183500 kB' 'Slab: 555708 kB' 'SReclaimable: 183500 kB' 'SUnreclaim: 372208 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:03:29.189 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.190 
06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.190 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:29.191 node0=512 expecting 512 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:29.191 node1=512 expecting 512 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:29.191 00:03:29.191 real 0m3.805s 00:03:29.191 user 0m1.329s 00:03:29.191 sys 0m2.476s 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:29.191 06:52:43 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:29.191 ************************************ 00:03:29.191 END TEST per_node_1G_alloc 00:03:29.191 ************************************ 00:03:29.191 06:52:43 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:29.191 06:52:43 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:29.191 06:52:43 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.191 06:52:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:29.191 ************************************ 00:03:29.191 
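The tail of per_node_1G_alloc above is the verification pass: for each NUMA node the test folds the node's surplus pages (get_meminfo HugePages_Surp, 0 here) into nodes_test[node], records the resulting counts, and confirms that both nodes ended up with the expected 512 pages ([[ 512 == 512 ]]). A minimal sketch of that comparison pattern, with the array names taken from the trace and expected=512 filled in for this particular run (not the verbatim hugepages.sh):

  #!/usr/bin/env bash
  # nodes_test[] = hugepages counted per node during the test,
  # nodes_sys[]  = pre-existing pages per node (0 in this run)
  nodes_test=([0]=512 [1]=512)
  nodes_sys=([0]=0 [1]=0)
  expected=512
  sorted_t=() sorted_s=()
  for node in "${!nodes_test[@]}"; do
      sorted_t[nodes_test[node]]=1   # record each distinct per-node count
      sorted_s[nodes_sys[node]]=1
      echo "node$node=${nodes_test[node]} expecting $expected"
  done
  # passes only if every node converged on the same expected count
  [[ ${!sorted_t[*]} == "$expected" ]] && echo OK

The same verification pattern repeats for even_2G_alloc below, with HUGE_EVEN_ALLOC driving the allocation instead of explicit per-node requests.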
START TEST even_2G_alloc 00:03:29.191 ************************************ 00:03:29.191 06:52:43 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:29.191 06:52:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:29.191 06:52:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:29.191 06:52:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:29.191 06:52:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:29.191 06:52:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:29.191 06:52:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:29.191 06:52:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:29.191 06:52:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:29.191 06:52:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:29.191 06:52:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:29.191 06:52:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:29.191 06:52:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:29.191 06:52:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:29.191 06:52:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:29.191 06:52:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:29.191 06:52:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:29.191 06:52:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:29.191 06:52:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:29.191 06:52:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:29.191 06:52:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:29.191 06:52:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:29.191 06:52:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:29.191 06:52:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:29.191 06:52:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:29.191 06:52:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:29.191 06:52:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:29.191 06:52:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.191 06:52:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:32.488 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:32.488 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:32.488 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:32.488 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:32.488 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:32.488 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:32.488 
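START TEST even_2G_alloc requests 2097152 (kB, i.e. 2 GiB) of the default 2048 kB hugepages, giving nr_hugepages=1024, and with NRHUGE=1024 plus HUGE_EVEN_ALLOC=yes it expects the pages to be spread evenly, 512 per NUMA node, before scripts/setup.sh runs; the vfio-pci lines around this point are that script reporting the IOAT and NVMe devices already bound. The even split itself goes through the standard per-node sysfs knobs; an illustrative sketch of that mechanism (generic kernel paths, not a quote from scripts/setup.sh):

  #!/usr/bin/env bash
  # Spread NRHUGE 2 MiB hugepages evenly across all online NUMA nodes.
  NRHUGE=${NRHUGE:-1024}
  nodes=(/sys/devices/system/node/node[0-9]*)
  per_node=$((NRHUGE / ${#nodes[@]}))
  for n in "${nodes[@]}"; do
      echo "$per_node" | sudo tee "$n/hugepages/hugepages-2048kB/nr_hugepages" >/dev/null
  done
  grep -E 'HugePages_(Total|Free)' /proc/meminfo   # should report 1024 total on this box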
0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:32.488 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:32.488 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:32.488 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:32.488 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:32.488 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:32.488 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:32.488 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:32.488 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:32.488 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:32.488 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:32.488 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:32.488 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:32.488 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:32.488 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:32.488 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:32.488 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:32.488 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:32.488 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:32.488 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:32.488 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38821428 kB' 'MemAvailable: 42911512 kB' 'Buffers: 4096 kB' 'Cached: 14966072 kB' 'SwapCached: 0 kB' 'Active: 11795700 kB' 'Inactive: 3698840 kB' 'Active(anon): 11317472 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3698840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527640 kB' 'Mapped: 203148 kB' 'Shmem: 10793100 kB' 'KReclaimable: 560508 kB' 'Slab: 1267892 kB' 'SReclaimable: 560508 kB' 'SUnreclaim: 707384 kB' 'KernelStack: 
22608 kB' 'PageTables: 9012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12789316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220160 kB' 'VmallocChunk: 0 kB' 'Percpu: 113792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4248948 kB' 'DirectMap2M: 43671552 kB' 'DirectMap1G: 20971520 kB' 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.489 06:52:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.489 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.490 06:52:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.490 06:52:46 
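The long runs of "# continue" above are the xtrace of get_meminfo in setup/common.sh stepping over every key printed from /proc/meminfo until it reaches the requested one; here AnonHugePages resolves to 0, so verify_nr_hugepages sets anon=0 and moves on to HugePages_Surp. Reassembled from the trace, the helper looks roughly like this (a reconstruction, not guaranteed to match the upstream source line for line):

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the +([0-9]) pattern below
  get_meminfo() {
      local get=$1 node=$2
      local var val
      local mem_f=/proc/meminfo mem
      # per-node queries read the node's own meminfo instead of the global one
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # strip the leading "Node N " prefix
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && echo "$val" && return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }
  get_meminfo HugePages_Free        # system-wide value from /proc/meminfo
  get_meminfo HugePages_Surp 1      # node 1 only, as in the trace

The HugePages_Surp lookup that follows walks the same key-by-key loop over the full /proc/meminfo dump, which is why the remainder of this block repeats the identical continue/IFS/read pattern.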
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38822916 kB' 'MemAvailable: 42913000 kB' 'Buffers: 4096 kB' 'Cached: 14966076 kB' 'SwapCached: 0 kB' 'Active: 11794820 kB' 'Inactive: 3698840 kB' 'Active(anon): 11316592 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3698840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526756 kB' 'Mapped: 203156 kB' 'Shmem: 10793104 kB' 'KReclaimable: 560508 kB' 'Slab: 1267948 kB' 'SReclaimable: 560508 kB' 'SUnreclaim: 707440 kB' 'KernelStack: 22544 kB' 'PageTables: 8324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12789332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220224 kB' 'VmallocChunk: 0 kB' 'Percpu: 113792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4248948 kB' 'DirectMap2M: 43671552 kB' 'DirectMap1G: 20971520 kB' 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.490 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.491 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.491 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.491 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.491 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.491 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.491 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.491 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.491 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.491 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.491 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.491 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.491 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.491 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.491 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.491 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.491 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.491 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.491 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.491 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.491 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.491 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.491 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.491 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.491 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.491 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.491 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.491 06:52:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue
[xtrace condensed: setup/common.sh@31-32 reads each remaining /proc/meminfo field (Unevictable through HugePages_Rsvd) and skips it with "continue", since none of them is the requested HugePages_Surp key]
00:03:32.492 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:32.492 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:32.492 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:32.492 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
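The xtrace above is the get_meminfo helper in setup/common.sh scanning /proc/meminfo one field at a time until it reaches HugePages_Surp, then echoing that field's value (0) back to setup/hugepages.sh. Below is a minimal sketch of that lookup for illustration only; the function name get_meminfo_sketch and the exact prefix handling are assumptions inferred from the trace, not the verbatim SPDK helper.

    #!/usr/bin/env bash
    shopt -s extglob

    # Look up one field from /proc/meminfo, or from a NUMA node's meminfo file
    # when a node index is given, the way the traced setup/common.sh helper
    # appears to work.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo mem
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # Per-node files prefix every field with "Node <N> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue   # the skipped-field loop seen in the trace
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo_sketch HugePages_Surp      # prints 0 for the snapshot dumped below
    get_meminfo_sketch HugePages_Surp 0    # same field, read from node0's meminfo instead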
00:03:32.492 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:32.492 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:32.492 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:32.492 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:32.492 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:32.492 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:32.492 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:32.492 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:32.492 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:32.492 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:32.492 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:32.492 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:32.492 06:52:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38822244 kB' 'MemAvailable: 42912328 kB' 'Buffers: 4096 kB' 'Cached: 14966092 kB' 'SwapCached: 0 kB' 'Active: 11795604 kB' 'Inactive: 3698840 kB' 'Active(anon): 11317376 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3698840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527516 kB' 'Mapped: 203156 kB' 'Shmem: 10793120 kB' 'KReclaimable: 560508 kB' 'Slab: 1267948 kB' 'SReclaimable: 560508 kB' 'SUnreclaim: 707440 kB' 'KernelStack: 22656 kB' 'PageTables: 8964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12788988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220288 kB' 'VmallocChunk: 0 kB' 'Percpu: 113792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4248948 kB' 'DirectMap2M: 43671552 kB' 'DirectMap1G: 20971520 kB'
[xtrace condensed: setup/common.sh@31-32 reads each /proc/meminfo field from MemTotal through HugePages_Free and skips it with "continue", since none of them is the requested HugePages_Rsvd key]
00:03:32.494 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:32.494 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:32.494 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:32.494 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:32.494 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:32.494 nr_hugepages=1024
06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:32.494 resv_hugepages=0
06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:32.494 surplus_hugepages=0
06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:32.494 anon_hugepages=0
06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:32.494 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
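At this point setup/hugepages.sh has gathered the counters it needs and reports nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 before asserting that the kernel exposes exactly the requested number of 2048 kB pages. Below is a standalone restatement of that arithmetic for illustration; the real assertions are the (( ... )) tests at setup/hugepages.sh@107 and @109 traced above, and the values are the ones echoed in this run.

    #!/usr/bin/env bash
    # Consistency check restated with the numbers from this run.
    nr_hugepages=1024   # pages requested by the even_2G_alloc case
    surp=0              # HugePages_Surp from /proc/meminfo
    resv=0              # HugePages_Rsvd from /proc/meminfo
    anon=0              # AnonHugePages (reported, not part of the sum)
    (( 1024 == nr_hugepages + surp + resv ))   # holds: 1024 == 1024 + 0 + 0
    (( 1024 == nr_hugepages ))                 # holds: no surplus or reserved pages in play
    echo "hugepage accounting consistent"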
00:03:32.494 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:32.494 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:32.494 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:32.494 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:32.494 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:32.494 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:32.494 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:32.494 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:32.494 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:32.494 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:32.494 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:32.494 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:32.495 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38822036 kB' 'MemAvailable: 42912120 kB' 'Buffers: 4096 kB' 'Cached: 14966112 kB' 'SwapCached: 0 kB' 'Active: 11795532 kB' 'Inactive: 3698840 kB' 'Active(anon): 11317304 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3698840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527332 kB' 'Mapped: 203156 kB' 'Shmem: 10793140 kB' 'KReclaimable: 560508 kB' 'Slab: 1267948 kB' 'SReclaimable: 560508 kB' 'SUnreclaim: 707440 kB' 'KernelStack: 22656 kB' 'PageTables: 8644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12789008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220256 kB' 'VmallocChunk: 0 kB' 'Percpu: 113792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4248948 kB' 'DirectMap2M: 43671552 kB' 'DirectMap1G: 20971520 kB'
[xtrace condensed: setup/common.sh@31-32 reads each /proc/meminfo field from MemTotal through Unaccepted and skips it with "continue", since none of them is the requested HugePages_Total key]
00:03:32.496 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:32.496 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:32.496 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:32.496 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:32.496 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:32.496 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:32.496 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:32.496 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:32.496 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:32.496 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:32.496 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:32.496 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:32.496 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:32.496 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
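With the global count confirmed, get_nodes enumerates the NUMA nodes under /sys/devices/system/node/ and the even_2G_alloc case expects the 1024 pages (2048 kB each, 2 GiB in total) to be split evenly, 512 per node; the per-node counters are then read from each node's own meminfo file, starting with node0 just below. The following is an illustrative standalone sketch of that per-node walk; expected_per_node is an assumed name, and the real bookkeeping uses the nodes_sys/nodes_test arrays traced above.

    #!/usr/bin/env bash
    shopt -s extglob

    expected_per_node=512   # 1024 hugepages spread evenly across 2 NUMA nodes
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$expected_per_node
    done
    echo "no_nodes=${#nodes_sys[@]}"   # 2 on this test system

    for n in "${!nodes_sys[@]}"; do
        # Every node exposes its own HugePages_* counters in sysfs.
        grep HugePages_ "/sys/devices/system/node/node$n/meminfo"
    done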
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.497 
06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.497 06:52:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.497 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 15397920 kB' 'MemUsed: 12305188 kB' 'SwapCached: 0 kB' 'Active: 5426584 kB' 'Inactive: 3287880 kB' 'Active(anon): 5225668 kB' 'Inactive(anon): 0 kB' 'Active(file): 200916 kB' 'Inactive(file): 3287880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8369432 kB' 'Mapped: 131324 kB' 'AnonPages: 345072 kB' 'Shmem: 4880636 kB' 'KernelStack: 10552 kB' 'PageTables: 3828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 183500 kB' 'Slab: 555792 kB' 'SReclaimable: 183500 kB' 'SUnreclaim: 372292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.498 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 06:52:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:32.499 node0=512 expecting 512 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.499 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.500 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:32.500 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:32.500 node1=512 expecting 512 00:03:32.500 06:52:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:32.500 00:03:32.500 real 0m3.670s 00:03:32.500 user 0m1.205s 00:03:32.500 sys 0m2.318s 00:03:32.500 06:52:47 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:32.500 06:52:47 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:32.500 ************************************ 00:03:32.500 END TEST even_2G_alloc 00:03:32.500 ************************************ 00:03:32.760 06:52:47 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:32.760 06:52:47 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:32.760 06:52:47 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:32.760 06:52:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:32.760 ************************************ 00:03:32.760 START TEST odd_alloc 00:03:32.760 ************************************ 00:03:32.760 06:52:47 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:32.760 06:52:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:32.760 06:52:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 
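For context, the even_2G_alloc verification traced above repeatedly walks /proc/meminfo and the per-node files /sys/devices/system/node/nodeN/meminfo field by field, and the odd_alloc test that starts here splits its page count across NUMA nodes (1024 as 512/512, 1025 as 513/512). The snippet below is a minimal standalone sketch of that lookup and split, assuming only the behaviour visible in the trace; the helper names get_node_meminfo and split_per_node are illustrative and are not the actual setup/common.sh or setup/hugepages.sh functions.

#!/usr/bin/env bash
# Minimal sketch, assuming the behaviour shown in the xtrace above;
# not the SPDK setup/common.sh implementation.

# Look up one field from /proc/meminfo, or from
# /sys/devices/system/node/node<N>/meminfo when a node number is given.
get_node_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix each line with "Node <N> "; strip it so the
    # field name is the first token, just like in /proc/meminfo.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"          # page count for HugePages_*, kB otherwise
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1
}

# Split a hugepage total across NUMA nodes the way these tests expect:
# 1024 over 2 nodes -> 512 512, 1025 over 2 nodes -> 513 512.
split_per_node() {
    local total=$1 nodes=$2 i per
    for ((i = 0; i < nodes; i++)); do
        per=$((total / nodes))
        (( i < total % nodes )) && per=$((per + 1))
        echo "node$i=$per"
    done
}

# Example usage (matching the log: HugePages_Surp is 0 on node 0, and
# odd_alloc lands 1025 pages as node0=513, node1=512):
get_node_meminfo HugePages_Surp 0
split_per_node 1025 2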
00:03:32.760 06:52:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:32.760 06:52:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:32.760 06:52:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:32.760 06:52:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:32.760 06:52:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:32.760 06:52:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:32.760 06:52:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:32.760 06:52:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:32.760 06:52:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:32.760 06:52:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:32.760 06:52:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:32.760 06:52:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:32.760 06:52:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:32.760 06:52:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:32.760 06:52:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:32.760 06:52:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:32.760 06:52:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:32.760 06:52:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:32.760 06:52:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:32.760 06:52:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:32.760 06:52:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:32.760 06:52:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:32.760 06:52:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:32.760 06:52:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:32.760 06:52:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.760 06:52:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:36.963 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:36.963 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:36.963 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:36.963 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:36.963 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:36.963 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:36.963 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:36.963 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:36.963 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:36.963 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:36.963 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:36.963 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:36.963 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 
00:03:36.963 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:36.963 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:36.963 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:36.963 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38836964 kB' 'MemAvailable: 42927048 kB' 'Buffers: 4096 kB' 'Cached: 14966248 kB' 'SwapCached: 0 kB' 'Active: 11795560 kB' 'Inactive: 3698840 kB' 'Active(anon): 11317332 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3698840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526940 kB' 'Mapped: 203236 kB' 'Shmem: 10793276 kB' 'KReclaimable: 560508 kB' 'Slab: 1267752 kB' 'SReclaimable: 560508 kB' 'SUnreclaim: 707244 kB' 'KernelStack: 22528 kB' 'PageTables: 8672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 12787672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220144 kB' 'VmallocChunk: 0 kB' 'Percpu: 113792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 
'DirectMap4k: 4248948 kB' 'DirectMap2M: 43671552 kB' 'DirectMap1G: 20971520 kB' 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.963 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 06:52:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 06:52:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 06:52:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 
06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38837304 kB' 'MemAvailable: 42927388 kB' 'Buffers: 4096 kB' 'Cached: 14966252 kB' 'SwapCached: 0 kB' 'Active: 11796160 kB' 'Inactive: 3698840 kB' 'Active(anon): 11317932 kB' 
'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3698840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527512 kB' 'Mapped: 203736 kB' 'Shmem: 10793280 kB' 'KReclaimable: 560508 kB' 'Slab: 1267752 kB' 'SReclaimable: 560508 kB' 'SUnreclaim: 707244 kB' 'KernelStack: 22496 kB' 'PageTables: 8580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 12788912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220128 kB' 'VmallocChunk: 0 kB' 'Percpu: 113792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4248948 kB' 'DirectMap2M: 43671552 kB' 'DirectMap1G: 20971520 kB' 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.965 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.966 06:52:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.966 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 06:52:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38833736 kB' 'MemAvailable: 42923820 kB' 'Buffers: 4096 kB' 'Cached: 14966252 kB' 'SwapCached: 0 kB' 'Active: 11797356 kB' 'Inactive: 3698840 kB' 'Active(anon): 11319128 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3698840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529264 kB' 'Mapped: 203660 kB' 'Shmem: 10793280 kB' 'KReclaimable: 560508 kB' 'Slab: 1267776 kB' 'SReclaimable: 560508 kB' 'SUnreclaim: 707268 kB' 'KernelStack: 22512 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 12790780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220112 kB' 'VmallocChunk: 0 kB' 'Percpu: 113792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4248948 kB' 'DirectMap2M: 43671552 kB' 'DirectMap1G: 20971520 kB' 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.967 
06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.968 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 
06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:36.969 nr_hugepages=1025 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:36.969 resv_hugepages=0 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:36.969 surplus_hugepages=0 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:36.969 anon_hugepages=0 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
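Aside on the trace above: setup/common.sh walks /proc/meminfo key by key, splitting each line on ': ' into a key/value pair, skipping every key that does not match the requested field, and echoing the value once it matches (0 here for both HugePages_Surp and HugePages_Rsvd); hugepages.sh then checks that the odd page count requested by the test (1025) is fully accounted for. Below is a minimal sketch of that lookup and check, assuming a stock /proc/meminfo; get_meminfo_sketch and the variable names are illustrative stand-ins, not the actual SPDK helpers.

#!/usr/bin/env bash
# Simplified sketch (not the SPDK setup/common.sh implementation): fetch one
# field from /proc/meminfo the same way the traced loop does, i.e. split each
# line on ': ' and compare the key against the requested name.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

# Hypothetical accounting check mirroring the shape of the hugepages.sh
# assertion seen in the trace: the odd page count requested by the test
# (1025 here) should be covered by what the kernel reports.
expected=1025
free=$(get_meminfo_sketch HugePages_Free)
surp=$(get_meminfo_sketch HugePages_Surp)
resv=$(get_meminfo_sketch HugePages_Rsvd)
(( expected == free + surp + resv )) && echo "odd_alloc accounting OK"

The real setup/common.sh additionally strips a leading "Node <n>" prefix from each line so the same scan works against a per-node meminfo file, which is why the trace shows the mem array being rewritten (common.sh@29) before each scan begins.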
00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38830324 kB' 'MemAvailable: 42920408 kB' 'Buffers: 4096 kB' 'Cached: 14966288 kB' 'SwapCached: 0 kB' 'Active: 11800908 kB' 'Inactive: 3698840 kB' 'Active(anon): 11322680 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3698840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532696 kB' 'Mapped: 204072 kB' 'Shmem: 10793316 kB' 'KReclaimable: 560508 kB' 'Slab: 1267776 kB' 'SReclaimable: 560508 kB' 'SUnreclaim: 707268 kB' 'KernelStack: 22496 kB' 'PageTables: 8588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 12793852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220116 kB' 'VmallocChunk: 0 kB' 'Percpu: 113792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4248948 kB' 'DirectMap2M: 43671552 kB' 'DirectMap1G: 20971520 kB' 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.969 06:52:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.969 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 06:52:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 23435440 kB' 'MemUsed: 9156644 kB' 'SwapCached: 0 kB' 'Active: 6370480 kB' 'Inactive: 410960 kB' 'Active(anon): 6093168 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 410960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6600924 kB' 'Mapped: 71972 kB' 'AnonPages: 183704 kB' 'Shmem: 5912652 kB' 'KernelStack: 12040 kB' 'PageTables: 4980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 377008 kB' 'Slab: 711788 kB' 'SReclaimable: 377008 kB' 'SUnreclaim: 334780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.971 06:52:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.971 06:52:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.971 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
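[editor's note] The repeated "continue" statements running through this stretch of the trace are setup/common.sh's get_meminfo helper scanning a node's meminfo file field by field until it reaches the requested field (HugePages_Surp for node 0 at this point). The following is a minimal, self-contained sketch of that pattern; it is illustrative only, not the SPDK source, and the helper name, argument order, and prefix handling are assumptions read off the trace.

    # Hedged sketch of a get_meminfo-style helper (not the real setup/common.sh).
    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo line var val _
        # Prefer the per-NUMA-node view when a node index is given and it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#"Node $node "}          # per-node files prefix every line with "Node N "
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue    # this is the 'continue' repeated in the trace above
            echo "$val"                         # value only; any trailing "kB" lands in _
            return 0
        done < "$mem_f"
        return 1
    }

Called as, for example, get_meminfo HugePages_Surp 0, it prints node 0's surplus hugepage count, which is the value the trace here resolves to 0.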
00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 06:52:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 15398564 kB' 'MemUsed: 12304544 kB' 'SwapCached: 0 kB' 'Active: 5427200 kB' 'Inactive: 3287880 kB' 'Active(anon): 5226284 kB' 'Inactive(anon): 0 kB' 'Active(file): 200916 kB' 'Inactive(file): 3287880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8369460 kB' 'Mapped: 131840 kB' 'AnonPages: 345720 kB' 'Shmem: 4880664 kB' 'KernelStack: 10424 kB' 'PageTables: 3484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 183500 kB' 'Slab: 555988 kB' 'SReclaimable: 183500 kB' 'SUnreclaim: 372488 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 06:52:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.972 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.973 06:52:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
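[editor's note] Once both nodes' surplus counts are folded in, odd_alloc ends with the order-insensitive comparison visible further down ("node0=512 expecting 513", "node1=513 expecting 512", then [[ 512 513 == 512 513 ]]). A hedged sketch of that check, with illustrative variable names and the counts taken from this log; it is not the test/setup/hugepages.sh source:

    # An odd total (1025 pages) split across two nodes can land as 512/513 or
    # 513/512, so the counts are compared as sorted sets, not per fixed node.
    expected=(513 512)   # the split the test requested per node (from this log)
    actual=(512 513)     # HugePages_Total read back from node0/node1 meminfo
    declare -a sorted_t sorted_s
    for node in 0 1; do
        sorted_t[${expected[$node]}]=1   # indexed arrays list their indices in
        sorted_s[${actual[$node]}]=1     # ascending order, hence "sorted"
        echo "node$node=${actual[$node]} expecting ${expected[$node]}"
    done
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "odd_alloc distribution OK"

Because only the sorted index lists are compared, the check passes even though the requested split landed on the opposite nodes, which is exactly what this run shows.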
00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.973 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- 
# echo 'node0=512 expecting 513' 00:03:36.974 node0=512 expecting 513 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:36.974 node1=513 expecting 512 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:36.974 00:03:36.974 real 0m3.843s 00:03:36.974 user 0m1.303s 00:03:36.974 sys 0m2.566s 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:36.974 06:52:51 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:36.974 ************************************ 00:03:36.974 END TEST odd_alloc 00:03:36.974 ************************************ 00:03:36.974 06:52:51 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:36.974 06:52:51 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:36.974 06:52:51 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.974 06:52:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:36.974 ************************************ 00:03:36.974 START TEST custom_alloc 00:03:36.974 ************************************ 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 
-- # for node in "${!nodes_hp[@]}" 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.974 06:52:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:40.270 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:40.270 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:40.270 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:40.270 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:40.270 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:40.270 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:40.270 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:40.270 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:40.270 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:40.270 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:40.270 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:40.270 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:40.270 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:40.270 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:40.270 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:40.270 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:40.270 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:40.567 06:52:54 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:40.567 06:52:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:40.567 06:52:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:40.567 06:52:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:40.567 06:52:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:40.567 06:52:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:40.567 06:52:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:40.567 06:52:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:40.567 06:52:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:40.567 06:52:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:40.567 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:40.567 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:40.567 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:40.567 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.567 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.567 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.567 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.567 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.567 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.567 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.567 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 37790744 kB' 'MemAvailable: 41880828 kB' 'Buffers: 4096 kB' 'Cached: 14966412 kB' 'SwapCached: 0 kB' 'Active: 11796660 kB' 'Inactive: 3698840 kB' 'Active(anon): 11318432 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3698840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528240 kB' 'Mapped: 203180 kB' 'Shmem: 10793440 kB' 'KReclaimable: 560508 kB' 'Slab: 1268264 kB' 'SReclaimable: 560508 kB' 'SUnreclaim: 707756 kB' 'KernelStack: 22576 kB' 'PageTables: 8984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 12790660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220400 kB' 'VmallocChunk: 0 kB' 'Percpu: 113792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4248948 kB' 'DirectMap2M: 43671552 kB' 'DirectMap1G: 20971520 kB' 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
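[editor's note] For custom_alloc the harness builds HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' before invoking scripts/setup.sh (see the trace above), and verify_nr_hugepages then checks that the kernel ended up with 1536 pages in total. Below is a rough, hedged sketch of how such a per-node split maps onto the kernel's sysfs knobs; this is not what scripts/setup.sh literally does, it needs root, and it assumes 2048 kB hugepages:

    declare -A nodes_hp=([0]=512 [1]=1024)   # the split from HUGENODE in this log
    total=0
    for node in "${!nodes_hp[@]}"; do
        sysfs=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
        echo "${nodes_hp[$node]}" > "$sysfs"   # ask the kernel for that many pages on this node
        (( total += $(<"$sysfs") ))            # read back what was actually allocated
    done
    # verify_nr_hugepages-style sanity check against the global counter (1536 expected here)
    grep -q "^HugePages_Total: *$total\$" /proc/meminfo && echo "hugepages: $total"

The per-node sysfs files are the standard kernel interface for NUMA-aware hugepage reservation; the test's meminfo dumps above (HugePages_Total: 1536, Hugetlb: 3145728 kB) are the global view of the same state.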
00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.568 06:52:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.568 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 37790656 kB' 'MemAvailable: 41880740 kB' 'Buffers: 4096 kB' 'Cached: 14966412 kB' 'SwapCached: 0 kB' 'Active: 11796544 kB' 'Inactive: 3698840 kB' 'Active(anon): 11318316 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3698840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528136 kB' 'Mapped: 203188 kB' 'Shmem: 10793440 kB' 'KReclaimable: 560508 kB' 'Slab: 1268280 kB' 'SReclaimable: 560508 kB' 'SUnreclaim: 707772 kB' 'KernelStack: 22528 kB' 'PageTables: 8480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 12790680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220320 kB' 'VmallocChunk: 0 kB' 'Percpu: 113792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4248948 kB' 'DirectMap2M: 43671552 kB' 'DirectMap1G: 20971520 kB' 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.569 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.570 06:52:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.570 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
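
The same scan is now being repeated for HugePages_Surp, and further down for HugePages_Rsvd; in this run both come back 0, matching the HugePages_Surp: 0 and HugePages_Rsvd: 0 fields in the meminfo dumps above. With nr_hugepages=1536 requested at hugepages.sh@188, the accounting checks traced later (hugepages.sh@107 and @109) reduce to the arithmetic below; a worked sketch under the values observed in this run, not the harness's exact code:

    # Values observed in this run of verify_nr_hugepages
    nr_hugepages=1536   # requested count, hugepages.sh@188
    anon=0              # AnonHugePages  (hugepages.sh@97)
    surp=0              # HugePages_Surp (hugepages.sh@99)
    resv=0              # HugePages_Rsvd (hugepages.sh@100)

    # hugepages.sh@107, as expanded in the trace below:
    (( 1536 == nr_hugepages + surp + resv )) && echo "hugepage accounting OK"
    # hugepages.sh@109, likewise:
    (( 1536 == nr_hugepages )) && echo "nr_hugepages OK"
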
00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 37790884 kB' 'MemAvailable: 41880968 kB' 'Buffers: 4096 kB' 'Cached: 14966432 kB' 'SwapCached: 0 kB' 'Active: 11796208 kB' 'Inactive: 3698840 kB' 'Active(anon): 11317980 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3698840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527708 kB' 'Mapped: 203180 kB' 'Shmem: 10793460 kB' 'KReclaimable: 560508 kB' 'Slab: 1268280 kB' 'SReclaimable: 560508 kB' 'SUnreclaim: 707772 kB' 'KernelStack: 22560 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 12790588 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 220288 kB' 'VmallocChunk: 0 kB' 'Percpu: 113792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4248948 kB' 'DirectMap2M: 43671552 kB' 'DirectMap1G: 20971520 kB' 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.571 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.572 
06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.572 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.573 06:52:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@100 -- # resv=0 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:40.573 nr_hugepages=1536 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:40.573 resv_hugepages=0 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:40.573 surplus_hugepages=0 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:40.573 anon_hugepages=0 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.573 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 37790960 kB' 'MemAvailable: 41881044 kB' 'Buffers: 4096 kB' 'Cached: 14966436 kB' 'SwapCached: 0 kB' 'Active: 11796208 kB' 'Inactive: 3698840 kB' 'Active(anon): 11317980 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3698840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527820 kB' 'Mapped: 203168 kB' 'Shmem: 10793464 kB' 'KReclaimable: 560508 kB' 'Slab: 1268248 kB' 'SReclaimable: 560508 kB' 'SUnreclaim: 707740 kB' 'KernelStack: 22432 kB' 'PageTables: 8304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 12788004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220192 kB' 'VmallocChunk: 0 kB' 'Percpu: 113792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4248948 kB' 'DirectMap2M: 43671552 kB' 'DirectMap1G: 20971520 kB' 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.574 06:52:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.574 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.575 06:52:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.575 06:52:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.575 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 23428556 kB' 'MemUsed: 9163528 kB' 'SwapCached: 0 kB' 'Active: 6371504 kB' 'Inactive: 410960 kB' 'Active(anon): 6094192 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 410960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6601068 kB' 'Mapped: 71820 kB' 'AnonPages: 184592 kB' 'Shmem: 5912796 kB' 'KernelStack: 12040 kB' 'PageTables: 5052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 377008 kB' 'Slab: 712408 kB' 'SReclaimable: 377008 kB' 'SUnreclaim: 335400 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.576 06:52:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.576 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.577 06:52:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.577 06:52:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 14362404 kB' 'MemUsed: 13340704 kB' 'SwapCached: 0 kB' 'Active: 5424984 kB' 'Inactive: 3287880 kB' 'Active(anon): 5224068 kB' 'Inactive(anon): 0 kB' 'Active(file): 200916 kB' 'Inactive(file): 3287880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8369476 kB' 'Mapped: 131348 kB' 'AnonPages: 343456 kB' 'Shmem: 4880680 kB' 'KernelStack: 10472 kB' 'PageTables: 3584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 183500 kB' 'Slab: 555840 kB' 'SReclaimable: 183500 kB' 'SUnreclaim: 372340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.577 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.578 06:52:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.578 06:52:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:40.578 node0=512 expecting 512 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:40.578 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:40.579 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:40.579 node1=1024 expecting 1024 00:03:40.579 06:52:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:40.579 00:03:40.579 real 0m4.087s 00:03:40.579 user 0m1.494s 00:03:40.579 sys 0m2.660s 00:03:40.579 06:52:55 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.579 06:52:55 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:40.579 ************************************ 00:03:40.579 END TEST custom_alloc 00:03:40.579 ************************************ 00:03:40.838 06:52:55 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:40.838 06:52:55 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.838 06:52:55 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.838 06:52:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:40.838 ************************************ 00:03:40.838 START TEST no_shrink_alloc 00:03:40.838 ************************************ 00:03:40.838 06:52:55 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:40.838 06:52:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:40.838 06:52:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:40.838 06:52:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:40.838 06:52:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:40.838 06:52:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:40.838 06:52:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:40.838 06:52:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:40.838 06:52:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:40.838 06:52:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:40.838 06:52:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:40.838 06:52:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:40.838 06:52:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:40.838 06:52:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:40.838 06:52:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:40.838 06:52:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:40.838 06:52:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:40.838 06:52:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:40.838 06:52:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:40.838 06:52:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 
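The trace above closes the custom_alloc case (node0=512 and node1=1024 both matched the expected values) and opens no_shrink_alloc, where get_test_nr_hugepages turns a 2097152 kB request pinned to node 0 into nr_hugepages=1024. A minimal sketch of that arithmetic, assuming the 2048 kB default hugepage size reported in this log's meminfo dump (illustrative only, not the SPDK setup/hugepages.sh source):

    # 2097152 kB requested / 2048 kB per hugepage = 1024 pages, all on node 0
    size_kb=2097152
    hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this rig
    nr_hugepages=$(( size_kb / hugepage_kb ))                        # 1024
    nodes_test=()
    for node in 0; do                                                # no_shrink_alloc pins node 0
        nodes_test[$node]=$nr_hugepages
    done
    echo "node0=${nodes_test[0]}"                                    # node0=1024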
00:03:40.838 06:52:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:40.838 06:52:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.838 06:52:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:44.126 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:44.126 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:44.126 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:44.126 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:44.126 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:44.126 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:44.126 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:44.126 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:44.126 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:44.126 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:44.127 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:44.127 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:44.127 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:44.127 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:44.127 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:44.127 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:44.127 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38818728 kB' 'MemAvailable: 42908812 kB' 'Buffers: 4096 kB' 'Cached: 14966580 kB' 'SwapCached: 0 kB' 'Active: 11797952 kB' 'Inactive: 3698840 kB' 'Active(anon): 11319724 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3698840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528992 kB' 'Mapped: 203260 kB' 'Shmem: 10793608 kB' 'KReclaimable: 560508 kB' 'Slab: 1267908 kB' 'SReclaimable: 560508 kB' 'SUnreclaim: 707400 kB' 'KernelStack: 22560 kB' 'PageTables: 8752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12789156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220144 kB' 'VmallocChunk: 0 kB' 'Percpu: 113792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4248948 kB' 'DirectMap2M: 43671552 kB' 'DirectMap1G: 20971520 kB' 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.391 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.392 06:52:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.392 06:52:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 06:52:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38819328 kB' 'MemAvailable: 42909412 kB' 'Buffers: 4096 kB' 'Cached: 14966584 kB' 'SwapCached: 0 kB' 'Active: 11796912 kB' 'Inactive: 3698840 kB' 'Active(anon): 11318684 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3698840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528404 kB' 'Mapped: 203180 kB' 'Shmem: 10793612 kB' 'KReclaimable: 560508 kB' 'Slab: 1267876 kB' 'SReclaimable: 560508 kB' 'SUnreclaim: 707368 kB' 'KernelStack: 22512 kB' 'PageTables: 8572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12789172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220112 kB' 'VmallocChunk: 0 kB' 'Percpu: 113792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4248948 kB' 'DirectMap2M: 43671552 kB' 'DirectMap1G: 20971520 kB' 
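What the xtrace is walking through here is get_meminfo from setup/common.sh: it snapshots /proc/meminfo (or a per-node nodeN/meminfo file, with the "Node N " prefix stripped), then reads the fields back with IFS=': ' until the requested key matches and echoes its value. A reconstructed sketch of that pattern, hedged as illustrative rather than the actual helper body:

    # Return the value of one /proc/meminfo key, as the trace above does key by key.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    get_meminfo AnonHugePages    # 0 here, hence anon=0 just below
    get_meminfo HugePages_Free   # 1024 in this run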
00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 06:52:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 
06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 06:52:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38820408 kB' 'MemAvailable: 42910492 kB' 'Buffers: 4096 kB' 'Cached: 14966600 kB' 'SwapCached: 0 kB' 'Active: 11796952 kB' 'Inactive: 3698840 kB' 'Active(anon): 11318724 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3698840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528408 kB' 'Mapped: 203180 kB' 'Shmem: 10793628 kB' 'KReclaimable: 560508 kB' 'Slab: 1267876 kB' 'SReclaimable: 560508 kB' 'SUnreclaim: 707368 kB' 'KernelStack: 22512 kB' 'PageTables: 8572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12789192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220112 kB' 'VmallocChunk: 0 kB' 'Percpu: 113792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4248948 kB' 'DirectMap2M: 43671552 kB' 'DirectMap1G: 20971520 kB' 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 
06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 
06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.396 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:44.397 nr_hugepages=1024 00:03:44.397 06:52:58 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:44.397 resv_hugepages=0 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:44.397 surplus_hugepages=0 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:44.397 anon_hugepages=0 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38820776 kB' 'MemAvailable: 42910860 kB' 'Buffers: 4096 kB' 'Cached: 14966624 kB' 'SwapCached: 0 kB' 'Active: 11796932 kB' 'Inactive: 3698840 kB' 'Active(anon): 11318704 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3698840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528404 kB' 'Mapped: 203180 kB' 'Shmem: 10793652 kB' 'KReclaimable: 560508 kB' 'Slab: 1267876 kB' 'SReclaimable: 560508 kB' 'SUnreclaim: 707368 kB' 'KernelStack: 22512 kB' 'PageTables: 8572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12789216 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220128 kB' 'VmallocChunk: 0 kB' 'Percpu: 113792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4248948 kB' 'DirectMap2M: 43671552 kB' 'DirectMap1G: 20971520 kB' 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.397 06:52:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.397 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
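For readability, here is a minimal sketch of what the xtrace above is doing: setup/common.sh's get_meminfo reads /proc/meminfo (or a per-node meminfo file when a node is given), splits each line on ': ', and skips every key until it reaches the requested one, whose value it echoes. This is reconstructed from the traced commands only and is an approximation, not the verbatim SPDK helper:

    # Sketch reconstructed from the xtrace; not the verbatim SPDK helper.
    shopt -s extglob                     # needed for the "Node <n> " prefix strip below

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # A per-node request reads that node's own meminfo file instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; strip it before parsing.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # keep scanning until the key matches
            echo "$val"                        # a kB figure, or a bare page count
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Total     # -> 1024 in this run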
00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.398 06:52:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.398 06:52:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.398 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
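The hugepages.sh checks wrapped around those lookups (the @100-@110 lines in the trace) are plain arithmetic over the values get_meminfo returns: the configured page count must match the sum of allocated, surplus and reserved pages. Roughly, as an illustration only, with variable names mirroring the trace and standard kernel paths assumed:

    # Illustration of the traced consistency checks; not the exact hugepages.sh code.
    nr_hugepages=$(< /proc/sys/vm/nr_hugepages)                  # 1024 in this run
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)  # 1024
    resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)  # 0 above
    surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)  # 0 above

    # The pool is healthy only if every configured page is accounted for.
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2
    (( total == nr_hugepages )) && echo "nr_hugepages=$nr_hugepages"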
00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- 
# no_nodes=2 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 22361932 kB' 'MemUsed: 10230152 kB' 'SwapCached: 0 kB' 'Active: 6370920 kB' 'Inactive: 410960 kB' 'Active(anon): 6093608 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 410960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6601240 kB' 'Mapped: 71820 kB' 'AnonPages: 183828 kB' 'Shmem: 5912968 kB' 'KernelStack: 12040 kB' 'PageTables: 4980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 377008 kB' 'Slab: 712036 kB' 'SReclaimable: 377008 kB' 'SUnreclaim: 335028 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.399 06:52:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.399 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.400 
06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.400 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.401 06:52:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.401 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.401 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:44.401 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.401 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.401 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.401 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.401 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:44.401 node0=1024 expecting 1024 00:03:44.401 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:44.401 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:44.401 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:44.401 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:44.401 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.401 06:52:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:48.600 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:48.600 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:48.600 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:48.600 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:48.600 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:48.600 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:48.600 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:48.600 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:48.600 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:48.600 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:48.600 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:48.600 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:48.600 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:48.600 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:48.600 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:48.600 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:48.600 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:48.600 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:48.600 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:48.600 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:48.600 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:48.600 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:48.600 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:48.600 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:48.600 06:53:02 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38823052 kB' 'MemAvailable: 42913136 kB' 'Buffers: 4096 kB' 'Cached: 14966724 kB' 'SwapCached: 0 kB' 'Active: 11801432 kB' 'Inactive: 3698840 kB' 'Active(anon): 11323204 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3698840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532136 kB' 'Mapped: 203704 kB' 'Shmem: 10793752 kB' 'KReclaimable: 560508 kB' 'Slab: 1268172 kB' 'SReclaimable: 560508 kB' 'SUnreclaim: 707664 kB' 'KernelStack: 22640 kB' 'PageTables: 8688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12796000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220400 kB' 'VmallocChunk: 0 kB' 'Percpu: 113792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4248948 kB' 'DirectMap2M: 43671552 kB' 'DirectMap1G: 20971520 kB' 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.601 06:53:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 06:53:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38820040 kB' 'MemAvailable: 42910124 kB' 'Buffers: 4096 kB' 'Cached: 14966728 kB' 'SwapCached: 0 kB' 'Active: 11804236 kB' 'Inactive: 3698840 kB' 'Active(anon): 11326008 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3698840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535532 kB' 'Mapped: 204052 kB' 'Shmem: 10793756 kB' 'KReclaimable: 560508 kB' 'Slab: 1268208 kB' 'SReclaimable: 560508 kB' 'SUnreclaim: 707700 kB' 'KernelStack: 22560 kB' 'PageTables: 8484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12798804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220308 kB' 'VmallocChunk: 0 kB' 'Percpu: 113792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4248948 kB' 'DirectMap2M: 43671552 kB' 'DirectMap1G: 20971520 kB' 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.602 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.603 06:53:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.603 06:53:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.603 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.604 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38822200 kB' 'MemAvailable: 42912284 kB' 'Buffers: 4096 kB' 'Cached: 14966744 kB' 'SwapCached: 0 kB' 'Active: 11800708 kB' 'Inactive: 3698840 kB' 
'Active(anon): 11322480 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3698840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532008 kB' 'Mapped: 204044 kB' 'Shmem: 10793772 kB' 'KReclaimable: 560508 kB' 'Slab: 1268208 kB' 'SReclaimable: 560508 kB' 'SUnreclaim: 707700 kB' 'KernelStack: 22624 kB' 'PageTables: 8904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12795248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220320 kB' 'VmallocChunk: 0 kB' 'Percpu: 113792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4248948 kB' 'DirectMap2M: 43671552 kB' 'DirectMap1G: 20971520 kB' 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.605 06:53:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.605 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
(the same setup/common.sh@31/@32 trace -- IFS=': ', read -r var val _, [[ <field> == HugePages_Rsvd ]], continue -- repeats for SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted; none match HugePages_Rsvd, so the scan continues)
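The trace above is setup/common.sh's get_meminfo walking /proc/meminfo one field at a time with IFS=': ' until it reaches the requested key (HugePages_Rsvd here, which comes back as 0 a few lines below). A minimal stand-alone sketch of the same lookup, with a hypothetical helper name rather than the script under test:

#!/usr/bin/env bash
# Hypothetical helper (not setup/common.sh itself): scan /proc/meminfo the way
# the get_meminfo trace above does and echo the value of a single key.
meminfo_value() {
    local key=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1    # key not present in this kernel's /proc/meminfo
}

# Usage matching the checks made in this test:
resv=$(meminfo_value HugePages_Rsvd)      # 0 on this host
total=$(meminfo_value HugePages_Total)    # 1024 on this host
echo "nr_hugepages=$total resv_hugepages=$resv"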
00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:48.606 nr_hugepages=1024 00:03:48.606 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.606 resv_hugepages=0 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.607 surplus_hugepages=0 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.607 anon_hugepages=0 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38816504 kB' 'MemAvailable: 42906588 kB' 'Buffers: 4096 kB' 'Cached: 14966768 kB' 'SwapCached: 0 kB' 'Active: 11804144 kB' 'Inactive: 3698840 kB' 'Active(anon): 11325916 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3698840 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535364 kB' 'Mapped: 203704 kB' 'Shmem: 10793796 kB' 'KReclaimable: 560508 kB' 'Slab: 1268208 kB' 'SReclaimable: 560508 kB' 'SUnreclaim: 707700 kB' 'KernelStack: 22464 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12797236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220244 kB' 'VmallocChunk: 0 kB' 'Percpu: 113792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4248948 kB' 'DirectMap2M: 43671552 kB' 'DirectMap1G: 20971520 kB' 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.607 06:53:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.607 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
(the same setup/common.sh@31/@32 trace repeats for Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal and CmaFree; none match HugePages_Total, so the scan continues)
00:03:48.608 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.608 06:53:02 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@32 -- # continue 00:03:48.608 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.608 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.608 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.608 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 22365828 kB' 'MemUsed: 10226256 kB' 'SwapCached: 0 kB' 'Active: 6370764 kB' 'Inactive: 410960 kB' 'Active(anon): 6093452 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 410960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6601344 
kB' 'Mapped: 71988 kB' 'AnonPages: 183568 kB' 'Shmem: 5913072 kB' 'KernelStack: 12024 kB' 'PageTables: 4984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 377008 kB' 'Slab: 712252 kB' 'SReclaimable: 377008 kB' 'SUnreclaim: 335244 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
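At this point the lookup has switched from /proc/meminfo to the per-node file /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the script strips (the "${mem[@]#Node +([0-9]) }" expansion above) before running the same field scan for HugePages_Surp. A hedged stand-alone sketch of that per-node lookup; the helper name and the sed-based prefix strip are illustrative, not the SPDK implementation:

#!/usr/bin/env bash
# Hypothetical helper: read one key from a NUMA node's meminfo file after
# dropping the "Node <N> " prefix, then scan key/value pairs as above.
node_meminfo_value() {
    local node=$1 key=$2 var val _
    local file=/sys/devices/system/node/node${node}/meminfo
    [[ -e $file ]] || return 1
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed "s/^Node ${node} //" "$file")
    return 1
}

# Usage matching the node0 accounting in this test:
node_meminfo_value 0 HugePages_Surp    # 0
node_meminfo_value 0 HugePages_Free    # 1024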
00:03:48.609 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
(the same setup/common.sh@31/@32 trace repeats for AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages and FilePmdMapped; none match HugePages_Surp, so the scan continues)
00:03:48.610 06:53:02
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.610 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.610 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.610 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.610 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.610 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.610 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.610 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.610 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.610 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.610 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.610 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.610 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.610 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.610 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.610 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.610 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:48.610 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.610 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.610 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.610 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.610 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:48.610 node0=1024 expecting 1024 00:03:48.610 06:53:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:48.610 00:03:48.610 real 0m7.454s 00:03:48.610 user 0m2.459s 00:03:48.610 sys 0m4.887s 00:03:48.610 06:53:02 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.610 06:53:02 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:48.610 ************************************ 00:03:48.610 END TEST no_shrink_alloc 00:03:48.610 ************************************ 00:03:48.610 06:53:02 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:48.610 06:53:02 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:48.610 06:53:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:48.610 06:53:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.610 06:53:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:48.610 06:53:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.610 06:53:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:48.610 06:53:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:48.610 06:53:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.610 06:53:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:48.610 06:53:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.610 06:53:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:48.610 06:53:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:48.610 06:53:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:48.610 00:03:48.610 real 0m29.531s 00:03:48.610 user 0m9.602s 00:03:48.610 sys 0m17.953s 00:03:48.610 06:53:02 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.610 06:53:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:48.610 ************************************ 00:03:48.610 END TEST hugepages 00:03:48.610 ************************************ 00:03:48.611 06:53:02 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:03:48.611 06:53:02 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.611 06:53:02 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.611 06:53:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:48.611 ************************************ 00:03:48.611 START TEST driver 00:03:48.611 ************************************ 00:03:48.611 06:53:02 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:03:48.611 * Looking for test storage... 
00:03:48.611 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:48.611 06:53:02 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:48.611 06:53:02 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:48.611 06:53:02 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:53.883 06:53:07 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:53.883 06:53:07 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.883 06:53:07 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.883 06:53:07 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:53.883 ************************************ 00:03:53.883 START TEST guess_driver 00:03:53.883 ************************************ 00:03:53.883 06:53:07 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:53.883 06:53:07 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:53.883 06:53:07 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:53.883 06:53:07 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:53.883 06:53:07 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:53.883 06:53:07 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:53.883 06:53:07 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:53.883 06:53:07 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:53.883 06:53:07 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:53.883 06:53:07 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:53.883 06:53:07 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 256 > 0 )) 00:03:53.883 06:53:07 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:53.883 06:53:07 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:53.883 06:53:07 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:53.883 06:53:07 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:53.883 06:53:07 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:53.883 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:53.883 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:53.883 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:53.883 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:53.883 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:53.883 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:53.883 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:53.883 06:53:07 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:53.883 06:53:07 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:53.883 06:53:07 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:53.883 06:53:07 setup.sh.driver.guess_driver -- 
setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:53.883 06:53:07 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:53.883 Looking for driver=vfio-pci 00:03:53.883 06:53:07 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.883 06:53:07 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:53.883 06:53:07 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.883 06:53:07 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:57.176 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:57.176 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:57.177 06:53:11 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:57.177 06:53:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.084 06:53:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.084 06:53:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.084 06:53:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.343 06:53:13 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:59.343 06:53:13 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:59.343 06:53:13 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:59.343 06:53:13 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:04.653 00:04:04.653 real 0m11.344s 00:04:04.653 user 0m2.875s 00:04:04.653 sys 0m5.715s 00:04:04.653 06:53:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.653 06:53:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:04.653 ************************************ 00:04:04.653 END TEST guess_driver 00:04:04.653 ************************************ 00:04:04.653 00:04:04.653 real 0m16.293s 00:04:04.653 user 0m4.112s 00:04:04.653 sys 0m8.400s 00:04:04.653 06:53:19 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.653 
06:53:19 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:04.653 ************************************ 00:04:04.653 END TEST driver 00:04:04.653 ************************************ 00:04:04.653 06:53:19 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:04.653 06:53:19 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.653 06:53:19 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.653 06:53:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:04.653 ************************************ 00:04:04.653 START TEST devices 00:04:04.653 ************************************ 00:04:04.653 06:53:19 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:04.912 * Looking for test storage... 00:04:04.912 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:04.912 06:53:19 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:04.912 06:53:19 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:04.912 06:53:19 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:04.912 06:53:19 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:10.190 06:53:23 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:10.190 06:53:23 setup.sh.devices -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:04:10.190 06:53:23 setup.sh.devices -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:04:10.190 06:53:23 setup.sh.devices -- common/autotest_common.sh@1668 -- # local nvme bdf 00:04:10.190 06:53:23 setup.sh.devices -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:04:10.190 06:53:23 setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:10.190 06:53:23 setup.sh.devices -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:04:10.190 06:53:23 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:10.190 06:53:23 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:04:10.190 06:53:23 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:10.190 06:53:23 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:10.190 06:53:23 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:10.190 06:53:23 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:10.190 06:53:23 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:10.190 06:53:23 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:10.190 06:53:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:10.190 06:53:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:10.190 06:53:23 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:10.190 06:53:23 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:10.190 06:53:23 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:10.190 06:53:23 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:10.190 06:53:23 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:10.190 No valid GPT data, bailing 00:04:10.190 
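The entries above and immediately below are the devices test deciding whether nvme0n1 may be claimed as the test disk: spdk-gpt.py reports no valid GPT data, blkid then finds no partition-table type, and the capacity derived from sysfs is compared against min_disk_size=3221225472. Reduced to a standalone sketch (the helper name is hypothetical; the real checks live in scripts/common.sh and test/setup/devices.sh), the filter is roughly:

is_candidate_disk() {
    local dev=$1
    local min_size=3221225472                        # 3 GiB floor, as declared in the trace
    # a non-empty PTTYPE means the disk already carries a partition table and is in use
    [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] || return 1
    # /sys/block/<dev>/size is in 512-byte sectors; convert to bytes before comparing
    local bytes=$(( $(cat "/sys/block/$dev/size") * 512 ))
    (( bytes >= min_size ))
}
is_candidate_disk nvme0n1 && echo "nvme0n1 usable as test disk"

In the run above the 2 TB drive at 0000:d8:00.0 passes both checks and becomes test_disk=nvme0n1.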
06:53:23 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:10.190 06:53:23 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:10.190 06:53:23 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:10.190 06:53:23 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:10.190 06:53:23 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:10.190 06:53:23 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:10.190 06:53:23 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016 00:04:10.190 06:53:23 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:04:10.190 06:53:23 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:10.190 06:53:23 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:04:10.190 06:53:23 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:10.190 06:53:23 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:10.190 06:53:23 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:10.190 06:53:23 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.190 06:53:23 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.190 06:53:23 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:10.190 ************************************ 00:04:10.190 START TEST nvme_mount 00:04:10.190 ************************************ 00:04:10.190 06:53:23 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:10.190 06:53:23 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:10.190 06:53:23 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:10.190 06:53:23 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.190 06:53:23 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:10.190 06:53:23 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:10.190 06:53:23 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:10.190 06:53:23 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:10.190 06:53:23 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:10.190 06:53:23 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:10.190 06:53:23 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:10.190 06:53:23 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:10.190 06:53:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:10.190 06:53:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:10.190 06:53:23 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:10.190 06:53:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:10.190 06:53:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:10.190 06:53:23 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:10.190 06:53:23 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- 
# sgdisk /dev/nvme0n1 --zap-all 00:04:10.190 06:53:23 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:10.450 Creating new GPT entries in memory. 00:04:10.450 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:10.450 other utilities. 00:04:10.450 06:53:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:10.450 06:53:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:10.450 06:53:24 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:10.450 06:53:24 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:10.450 06:53:24 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:11.388 Creating new GPT entries in memory. 00:04:11.388 The operation has completed successfully. 00:04:11.388 06:53:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:11.388 06:53:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:11.388 06:53:25 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1415528 00:04:11.388 06:53:25 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:11.388 06:53:25 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:11.388 06:53:25 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:11.388 06:53:25 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:11.388 06:53:25 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:11.388 06:53:25 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:11.388 06:53:25 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:11.388 06:53:25 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:11.388 06:53:25 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:11.388 06:53:25 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:11.388 06:53:25 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:11.388 06:53:25 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:11.388 06:53:25 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:11.388 06:53:25 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:11.388 06:53:25 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:11.388 06:53:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:11.388 06:53:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:11.388 06:53:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:11.388 06:53:25 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.388 06:53:25 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:15.584 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.584 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.584 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.584 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.584 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.584 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.584 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.584 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.584 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.584 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.584 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.584 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.584 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.584 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.584 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.584 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.584 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.584 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.584 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.584 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.584 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.584 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.584 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.584 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.584 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.584 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.585 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.585 06:53:29 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.585 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.585 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.585 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.585 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.585 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.585 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:15.585 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:15.585 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.585 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:15.585 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:15.585 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.585 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:15.585 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:15.585 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:15.585 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.585 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.585 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:15.585 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:15.585 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:15.585 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:15.585 06:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:15.844 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:15.844 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:15.844 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:15.844 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:15.844 06:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:15.844 06:53:30 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:15.844 06:53:30 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.844 06:53:30 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:15.844 06:53:30 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:15.844 06:53:30 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.845 06:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:15.845 06:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:15.845 06:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:15.845 06:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.845 06:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:15.845 06:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:15.845 06:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:15.845 06:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:15.845 06:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:15.845 06:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.845 06:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:15.845 06:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:15.845 06:53:30 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.845 06:53:30 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
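Each nvme_mount scenario traced here follows the same format/mount/verify/cleanup cycle: build an ext4 filesystem (first on nvme0n1p1, then on the whole nvme0n1), mount it under test/setup/nvme_mount, drop a marker file, re-run setup.sh config with PCI_ALLOWED=0000:d8:00.0 to confirm the busy device is reported as "Active" and not rebound, check the mount and the marker with mountpoint -q and a -e test, then tear everything down with umount and wipefs. Condensed into a standalone sketch (paths shortened to a stand-in directory; the real flow is test/setup/devices.sh):

dev=/dev/nvme0n1p1                     # the second pass in the log uses /dev/nvme0n1 instead
mnt=$HOME/nvme_mount                   # stand-in for .../spdk/test/setup/nvme_mount
mkfs.ext4 -qF "$dev"                   # same mkfs invocation as in the trace
mkdir -p "$mnt"
mount "$dev" "$mnt"
touch "$mnt/test_nvme"                 # marker file the verify step looks for
mountpoint -q "$mnt"                   # verify: mount point is active
[[ -e "$mnt/test_nvme" ]]              # verify: marker file is present
rm "$mnt/test_nvme"
umount "$mnt"
wipefs --all "$dev"                    # cleanup so the next scenario starts clean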
00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:20.040 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:20.041 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:20.041 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:20.041 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:20.041 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:20.041 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.041 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:20.041 06:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:20.041 06:53:34 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.041 06:53:34 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.237 06:53:38 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:24.237 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:24.237 00:04:24.237 real 0m14.495s 00:04:24.237 user 0m4.388s 00:04:24.237 sys 0m8.054s 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.237 06:53:38 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:24.237 ************************************ 00:04:24.237 END TEST nvme_mount 00:04:24.237 ************************************ 00:04:24.237 06:53:38 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:24.237 06:53:38 setup.sh.devices -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.237 06:53:38 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.237 06:53:38 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:24.237 ************************************ 00:04:24.237 START TEST dm_mount 00:04:24.237 ************************************ 00:04:24.237 06:53:38 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:24.237 06:53:38 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:24.237 06:53:38 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:24.237 06:53:38 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:24.237 06:53:38 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:24.237 06:53:38 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:24.237 06:53:38 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:24.237 06:53:38 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:24.237 06:53:38 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:24.237 06:53:38 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:24.237 06:53:38 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:24.237 06:53:38 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:24.237 06:53:38 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.237 06:53:38 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:24.237 06:53:38 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:24.237 06:53:38 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.237 06:53:38 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:24.237 06:53:38 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:24.237 06:53:38 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.237 06:53:38 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:24.237 06:53:38 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:24.237 06:53:38 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:25.176 Creating new GPT entries in memory. 00:04:25.176 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:25.176 other utilities. 00:04:25.176 06:53:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:25.176 06:53:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.176 06:53:39 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:25.176 06:53:39 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:25.176 06:53:39 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:26.116 Creating new GPT entries in memory. 00:04:26.116 The operation has completed successfully. 
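At this point the dm_mount test has zapped nvme0n1 and created the first 1 GiB partition; the entries that follow add a second partition and wrap both in a device-mapper target named nvme_dm_test (it resolves to /dev/dm-2 later in the log), which is then formatted and mounted just like the plain NVMe disk was. A hedged sketch of that sequence; the dmsetup table shown is an assumed linear concatenation, since the trace records only the dmsetup create call, not the table it was fed:

flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199     # first partition (completed above)
flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351  # second partition (next entries in the log)
p1=$(blockdev --getsz /dev/nvme0n1p1)                           # partition sizes in 512-byte sectors
p2=$(blockdev --getsz /dev/nvme0n1p2)
dmsetup create nvme_dm_test <<EOF
0 $p1 linear /dev/nvme0n1p1 0
$p1 $p2 linear /dev/nvme0n1p2 0
EOF
readlink -f /dev/mapper/nvme_dm_test     # resolves to /dev/dm-2 in the run above
mkfs.ext4 -qF /dev/mapper/nvme_dm_test   # then mounted under test/setup/dm_mount and verified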
00:04:26.116 06:53:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:26.116 06:53:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.116 06:53:40 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:26.116 06:53:40 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:26.116 06:53:40 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:27.053 The operation has completed successfully. 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1420723 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:27.053 06:53:41 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.053 06:53:41 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:30.400 06:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.400 06:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:30.400 06:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:30.400 06:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:30.400 06:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:30.400 06:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:30.400 06:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:30.660 06:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:30.660 06:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:30.660 06:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:30.660 06:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:30.660 06:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:30.660 06:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@53 
-- # local found=0 00:04:30.660 06:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:30.660 06:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:30.660 06:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:30.660 06:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.660 06:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:30.660 06:53:45 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.660 06:53:45 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.952 06:53:48 
setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:33.952 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.211 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:34.211 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:34.211 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:34.211 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:34.211 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:34.211 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:34.211 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:34.211 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:34.211 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:34.211 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:34.211 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:34.211 06:53:48 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:34.211 00:04:34.211 real 0m10.357s 00:04:34.211 user 0m2.395s 00:04:34.211 sys 0m4.883s 00:04:34.211 06:53:48 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.211 06:53:48 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:34.211 ************************************ 00:04:34.211 END TEST dm_mount 00:04:34.211 ************************************ 00:04:34.211 06:53:48 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:34.211 06:53:48 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:34.211 06:53:48 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.211 06:53:48 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:34.211 06:53:48 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:34.211 06:53:48 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b 
/dev/nvme0n1 ]] 00:04:34.211 06:53:48 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:34.470 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:34.470 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:34.470 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:34.470 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:34.471 06:53:49 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:34.471 06:53:49 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:34.730 06:53:49 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:34.730 06:53:49 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:34.730 06:53:49 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:34.730 06:53:49 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:34.730 06:53:49 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:34.730 00:04:34.730 real 0m29.925s 00:04:34.730 user 0m8.550s 00:04:34.730 sys 0m16.182s 00:04:34.730 06:53:49 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.730 06:53:49 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:34.730 ************************************ 00:04:34.730 END TEST devices 00:04:34.730 ************************************ 00:04:34.730 00:04:34.730 real 1m44.862s 00:04:34.730 user 0m31.311s 00:04:34.730 sys 1m0.399s 00:04:34.730 06:53:49 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.730 06:53:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:34.730 ************************************ 00:04:34.730 END TEST setup.sh 00:04:34.730 ************************************ 00:04:34.730 06:53:49 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:04:38.923 Hugepages 00:04:38.923 node hugesize free / total 00:04:38.923 node0 1048576kB 0 / 0 00:04:38.923 node0 2048kB 2048 / 2048 00:04:38.923 node1 1048576kB 0 / 0 00:04:38.923 node1 2048kB 0 / 0 00:04:38.923 00:04:38.923 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:38.923 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:38.923 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:38.923 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:38.923 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:38.923 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:38.923 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:38.923 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:38.923 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:38.923 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:38.923 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:38.923 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:38.923 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:38.923 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:38.923 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:38.923 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:38.923 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:38.923 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:38.923 06:53:53 -- spdk/autotest.sh@130 -- # uname -s 00:04:38.923 06:53:53 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:38.923 06:53:53 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:38.923 06:53:53 -- 
common/autotest_common.sh@1529 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:43.114 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:43.114 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:43.114 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:43.114 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:43.114 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:43.114 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:43.114 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:43.114 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:43.114 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:43.114 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:43.114 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:43.114 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:43.114 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:43.114 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:43.114 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:43.114 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:45.020 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:45.020 06:53:59 -- common/autotest_common.sh@1530 -- # sleep 1 00:04:45.958 06:54:00 -- common/autotest_common.sh@1531 -- # bdfs=() 00:04:45.958 06:54:00 -- common/autotest_common.sh@1531 -- # local bdfs 00:04:45.958 06:54:00 -- common/autotest_common.sh@1532 -- # bdfs=($(get_nvme_bdfs)) 00:04:45.958 06:54:00 -- common/autotest_common.sh@1532 -- # get_nvme_bdfs 00:04:45.958 06:54:00 -- common/autotest_common.sh@1511 -- # bdfs=() 00:04:45.958 06:54:00 -- common/autotest_common.sh@1511 -- # local bdfs 00:04:45.958 06:54:00 -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:45.958 06:54:00 -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:04:45.958 06:54:00 -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:45.958 06:54:00 -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:04:45.958 06:54:00 -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:d8:00.0 00:04:45.958 06:54:00 -- common/autotest_common.sh@1534 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:50.211 Waiting for block devices as requested 00:04:50.211 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:50.211 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:50.211 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:50.211 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:50.211 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:50.211 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:50.211 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:50.211 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:50.211 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:50.211 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:50.503 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:50.503 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:50.503 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:50.762 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:50.762 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:50.762 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:51.022 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:51.022 06:54:05 -- common/autotest_common.sh@1536 -- # for bdf in "${bdfs[@]}" 00:04:51.022 06:54:05 -- common/autotest_common.sh@1537 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:51.022 06:54:05 -- common/autotest_common.sh@1500 -- # 
readlink -f /sys/class/nvme/nvme0 00:04:51.022 06:54:05 -- common/autotest_common.sh@1500 -- # grep 0000:d8:00.0/nvme/nvme 00:04:51.022 06:54:05 -- common/autotest_common.sh@1500 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:51.022 06:54:05 -- common/autotest_common.sh@1501 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:51.022 06:54:05 -- common/autotest_common.sh@1505 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:51.022 06:54:05 -- common/autotest_common.sh@1505 -- # printf '%s\n' nvme0 00:04:51.022 06:54:05 -- common/autotest_common.sh@1537 -- # nvme_ctrlr=/dev/nvme0 00:04:51.022 06:54:05 -- common/autotest_common.sh@1538 -- # [[ -z /dev/nvme0 ]] 00:04:51.022 06:54:05 -- common/autotest_common.sh@1543 -- # nvme id-ctrl /dev/nvme0 00:04:51.022 06:54:05 -- common/autotest_common.sh@1543 -- # grep oacs 00:04:51.022 06:54:05 -- common/autotest_common.sh@1543 -- # cut -d: -f2 00:04:51.022 06:54:05 -- common/autotest_common.sh@1543 -- # oacs=' 0xe' 00:04:51.022 06:54:05 -- common/autotest_common.sh@1544 -- # oacs_ns_manage=8 00:04:51.022 06:54:05 -- common/autotest_common.sh@1546 -- # [[ 8 -ne 0 ]] 00:04:51.022 06:54:05 -- common/autotest_common.sh@1552 -- # nvme id-ctrl /dev/nvme0 00:04:51.022 06:54:05 -- common/autotest_common.sh@1552 -- # grep unvmcap 00:04:51.022 06:54:05 -- common/autotest_common.sh@1552 -- # cut -d: -f2 00:04:51.022 06:54:05 -- common/autotest_common.sh@1552 -- # unvmcap=' 0' 00:04:51.022 06:54:05 -- common/autotest_common.sh@1553 -- # [[ 0 -eq 0 ]] 00:04:51.022 06:54:05 -- common/autotest_common.sh@1555 -- # continue 00:04:51.022 06:54:05 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:51.022 06:54:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:51.022 06:54:05 -- common/autotest_common.sh@10 -- # set +x 00:04:51.281 06:54:05 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:51.281 06:54:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:51.281 06:54:05 -- common/autotest_common.sh@10 -- # set +x 00:04:51.281 06:54:05 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:55.475 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:55.475 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:55.475 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:55.475 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:55.475 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:55.475 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:55.475 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:55.475 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:55.475 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:55.475 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:55.475 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:55.475 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:55.475 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:55.475 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:55.475 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:55.475 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:57.383 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:57.383 06:54:11 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:57.383 06:54:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:57.383 06:54:11 -- common/autotest_common.sh@10 -- # set +x 00:04:57.383 06:54:11 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:57.383 06:54:11 -- 
common/autotest_common.sh@1589 -- # mapfile -t bdfs 00:04:57.383 06:54:11 -- common/autotest_common.sh@1589 -- # get_nvme_bdfs_by_id 0x0a54 00:04:57.383 06:54:11 -- common/autotest_common.sh@1575 -- # bdfs=() 00:04:57.383 06:54:11 -- common/autotest_common.sh@1575 -- # local bdfs 00:04:57.383 06:54:11 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs 00:04:57.383 06:54:11 -- common/autotest_common.sh@1511 -- # bdfs=() 00:04:57.383 06:54:11 -- common/autotest_common.sh@1511 -- # local bdfs 00:04:57.383 06:54:11 -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:57.383 06:54:11 -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:57.383 06:54:11 -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:04:57.383 06:54:11 -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:04:57.383 06:54:11 -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:d8:00.0 00:04:57.383 06:54:11 -- common/autotest_common.sh@1577 -- # for bdf in $(get_nvme_bdfs) 00:04:57.383 06:54:11 -- common/autotest_common.sh@1578 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:57.383 06:54:11 -- common/autotest_common.sh@1578 -- # device=0x0a54 00:04:57.383 06:54:11 -- common/autotest_common.sh@1579 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:57.383 06:54:11 -- common/autotest_common.sh@1580 -- # bdfs+=($bdf) 00:04:57.383 06:54:11 -- common/autotest_common.sh@1584 -- # printf '%s\n' 0000:d8:00.0 00:04:57.383 06:54:11 -- common/autotest_common.sh@1590 -- # [[ -z 0000:d8:00.0 ]] 00:04:57.383 06:54:11 -- common/autotest_common.sh@1594 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.383 06:54:11 -- common/autotest_common.sh@1595 -- # spdk_tgt_pid=1432333 00:04:57.383 06:54:11 -- common/autotest_common.sh@1596 -- # waitforlisten 1432333 00:04:57.383 06:54:11 -- common/autotest_common.sh@829 -- # '[' -z 1432333 ']' 00:04:57.383 06:54:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.383 06:54:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.383 06:54:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.383 06:54:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.383 06:54:11 -- common/autotest_common.sh@10 -- # set +x 00:04:57.642 [2024-07-24 06:54:12.057411] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
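A minimal shell sketch of the BDF discovery that opal_revert_cleanup performs in the trace above, assuming an SPDK checkout at $SPDK_DIR (the workspace path shown in the trace); this is reconstructed for illustration and is not captured output:

    SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # Enumerate local NVMe BDFs the way get_nvme_bdfs does in the trace.
    bdfs=($("$SPDK_DIR/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    # Keep controllers whose PCI device ID matches 0x0a54, as get_nvme_bdfs_by_id 0x0a54 does.
    for bdf in "${bdfs[@]}"; do
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && echo "$bdf"
    done

On this node the loop reports the single controller 0000:d8:00.0, matching the printf seen in the trace.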
00:04:57.642 [2024-07-24 06:54:12.057513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1432333 ] 00:04:57.642 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.642 [2024-07-24 06:54:12.205927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.901 [2024-07-24 06:54:12.414422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.837 06:54:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.838 06:54:13 -- common/autotest_common.sh@862 -- # return 0 00:04:58.838 06:54:13 -- common/autotest_common.sh@1598 -- # bdf_id=0 00:04:58.838 06:54:13 -- common/autotest_common.sh@1599 -- # for bdf in "${bdfs[@]}" 00:04:58.838 06:54:13 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:02.119 nvme0n1 00:05:02.119 06:54:16 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:02.119 [2024-07-24 06:54:16.484471] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:02.119 request: 00:05:02.119 { 00:05:02.119 "nvme_ctrlr_name": "nvme0", 00:05:02.119 "password": "test", 00:05:02.119 "method": "bdev_nvme_opal_revert", 00:05:02.119 "req_id": 1 00:05:02.119 } 00:05:02.119 Got JSON-RPC error response 00:05:02.119 response: 00:05:02.119 { 00:05:02.119 "code": -32602, 00:05:02.119 "message": "Invalid parameters" 00:05:02.119 } 00:05:02.119 06:54:16 -- common/autotest_common.sh@1602 -- # true 00:05:02.119 06:54:16 -- common/autotest_common.sh@1603 -- # (( ++bdf_id )) 00:05:02.119 06:54:16 -- common/autotest_common.sh@1606 -- # killprocess 1432333 00:05:02.119 06:54:16 -- common/autotest_common.sh@948 -- # '[' -z 1432333 ']' 00:05:02.119 06:54:16 -- common/autotest_common.sh@952 -- # kill -0 1432333 00:05:02.119 06:54:16 -- common/autotest_common.sh@953 -- # uname 00:05:02.119 06:54:16 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:02.119 06:54:16 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1432333 00:05:02.119 06:54:16 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:02.119 06:54:16 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:02.119 06:54:16 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1432333' 00:05:02.119 killing process with pid 1432333 00:05:02.119 06:54:16 -- common/autotest_common.sh@967 -- # kill 1432333 00:05:02.119 06:54:16 -- common/autotest_common.sh@972 -- # wait 1432333 00:05:07.392 06:54:21 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:07.392 06:54:21 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:07.392 06:54:21 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:07.392 06:54:21 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:07.392 06:54:21 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:07.392 06:54:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:07.392 06:54:21 -- common/autotest_common.sh@10 -- # set +x 00:05:07.392 06:54:21 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:07.392 06:54:21 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:07.392 06:54:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
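The two RPCs exercised above can be replayed by hand against a running spdk_tgt listening on /var/tmp/spdk.sock; a hedged sketch, reusing the exact commands from the trace:

    rpc="$SPDK_DIR/scripts/rpc.py"     # SPDK_DIR as in the earlier sketch
    $rpc bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
    # On this drive the revert fails with JSON-RPC code -32602 ("nvme0 not support opal");
    # the trace shows the test accepting that outcome and falling through to true.
    $rpc bdev_nvme_opal_revert -b nvme0 -p test || true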
00:05:07.392 06:54:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.392 06:54:21 -- common/autotest_common.sh@10 -- # set +x 00:05:07.392 ************************************ 00:05:07.392 START TEST env 00:05:07.392 ************************************ 00:05:07.392 06:54:21 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:07.392 * Looking for test storage... 00:05:07.392 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:05:07.392 06:54:21 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:07.392 06:54:21 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.392 06:54:21 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.392 06:54:21 env -- common/autotest_common.sh@10 -- # set +x 00:05:07.392 ************************************ 00:05:07.392 START TEST env_memory 00:05:07.392 ************************************ 00:05:07.392 06:54:21 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:07.392 00:05:07.392 00:05:07.392 CUnit - A unit testing framework for C - Version 2.1-3 00:05:07.392 http://cunit.sourceforge.net/ 00:05:07.392 00:05:07.392 00:05:07.392 Suite: memory 00:05:07.392 Test: alloc and free memory map ...[2024-07-24 06:54:21.212764] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:07.392 passed 00:05:07.392 Test: mem map translation ...[2024-07-24 06:54:21.250220] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:07.392 [2024-07-24 06:54:21.250249] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:07.392 [2024-07-24 06:54:21.250306] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:07.392 [2024-07-24 06:54:21.250326] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:07.392 passed 00:05:07.392 Test: mem map registration ...[2024-07-24 06:54:21.306633] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:07.392 [2024-07-24 06:54:21.306660] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:07.392 passed 00:05:07.392 Test: mem map adjacent registrations ...passed 00:05:07.392 00:05:07.392 Run Summary: Type Total Ran Passed Failed Inactive 00:05:07.392 suites 1 1 n/a 0 0 00:05:07.392 tests 4 4 4 0 0 00:05:07.392 asserts 152 152 152 0 n/a 00:05:07.392 00:05:07.392 Elapsed time = 0.208 seconds 00:05:07.392 00:05:07.392 real 0m0.249s 00:05:07.392 user 0m0.221s 00:05:07.392 sys 0m0.027s 00:05:07.392 06:54:21 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.392 06:54:21 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:07.392 
************************************ 00:05:07.393 END TEST env_memory 00:05:07.393 ************************************ 00:05:07.393 06:54:21 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:07.393 06:54:21 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.393 06:54:21 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.393 06:54:21 env -- common/autotest_common.sh@10 -- # set +x 00:05:07.393 ************************************ 00:05:07.393 START TEST env_vtophys 00:05:07.393 ************************************ 00:05:07.393 06:54:21 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:07.393 EAL: lib.eal log level changed from notice to debug 00:05:07.393 EAL: Detected lcore 0 as core 0 on socket 0 00:05:07.393 EAL: Detected lcore 1 as core 1 on socket 0 00:05:07.393 EAL: Detected lcore 2 as core 2 on socket 0 00:05:07.393 EAL: Detected lcore 3 as core 3 on socket 0 00:05:07.393 EAL: Detected lcore 4 as core 4 on socket 0 00:05:07.393 EAL: Detected lcore 5 as core 5 on socket 0 00:05:07.393 EAL: Detected lcore 6 as core 6 on socket 0 00:05:07.393 EAL: Detected lcore 7 as core 8 on socket 0 00:05:07.393 EAL: Detected lcore 8 as core 9 on socket 0 00:05:07.393 EAL: Detected lcore 9 as core 10 on socket 0 00:05:07.393 EAL: Detected lcore 10 as core 11 on socket 0 00:05:07.393 EAL: Detected lcore 11 as core 12 on socket 0 00:05:07.393 EAL: Detected lcore 12 as core 13 on socket 0 00:05:07.393 EAL: Detected lcore 13 as core 14 on socket 0 00:05:07.393 EAL: Detected lcore 14 as core 16 on socket 0 00:05:07.393 EAL: Detected lcore 15 as core 17 on socket 0 00:05:07.393 EAL: Detected lcore 16 as core 18 on socket 0 00:05:07.393 EAL: Detected lcore 17 as core 19 on socket 0 00:05:07.393 EAL: Detected lcore 18 as core 20 on socket 0 00:05:07.393 EAL: Detected lcore 19 as core 21 on socket 0 00:05:07.393 EAL: Detected lcore 20 as core 22 on socket 0 00:05:07.393 EAL: Detected lcore 21 as core 24 on socket 0 00:05:07.393 EAL: Detected lcore 22 as core 25 on socket 0 00:05:07.393 EAL: Detected lcore 23 as core 26 on socket 0 00:05:07.393 EAL: Detected lcore 24 as core 27 on socket 0 00:05:07.393 EAL: Detected lcore 25 as core 28 on socket 0 00:05:07.393 EAL: Detected lcore 26 as core 29 on socket 0 00:05:07.393 EAL: Detected lcore 27 as core 30 on socket 0 00:05:07.393 EAL: Detected lcore 28 as core 0 on socket 1 00:05:07.393 EAL: Detected lcore 29 as core 1 on socket 1 00:05:07.393 EAL: Detected lcore 30 as core 2 on socket 1 00:05:07.393 EAL: Detected lcore 31 as core 3 on socket 1 00:05:07.393 EAL: Detected lcore 32 as core 4 on socket 1 00:05:07.393 EAL: Detected lcore 33 as core 5 on socket 1 00:05:07.393 EAL: Detected lcore 34 as core 6 on socket 1 00:05:07.393 EAL: Detected lcore 35 as core 8 on socket 1 00:05:07.393 EAL: Detected lcore 36 as core 9 on socket 1 00:05:07.393 EAL: Detected lcore 37 as core 10 on socket 1 00:05:07.393 EAL: Detected lcore 38 as core 11 on socket 1 00:05:07.393 EAL: Detected lcore 39 as core 12 on socket 1 00:05:07.393 EAL: Detected lcore 40 as core 13 on socket 1 00:05:07.393 EAL: Detected lcore 41 as core 14 on socket 1 00:05:07.393 EAL: Detected lcore 42 as core 16 on socket 1 00:05:07.393 EAL: Detected lcore 43 as core 17 on socket 1 00:05:07.393 EAL: Detected lcore 44 as core 18 on socket 1 00:05:07.393 EAL: Detected lcore 45 as core 19 on socket 1 00:05:07.393 EAL: Detected 
lcore 46 as core 20 on socket 1 00:05:07.393 EAL: Detected lcore 47 as core 21 on socket 1 00:05:07.393 EAL: Detected lcore 48 as core 22 on socket 1 00:05:07.393 EAL: Detected lcore 49 as core 24 on socket 1 00:05:07.393 EAL: Detected lcore 50 as core 25 on socket 1 00:05:07.393 EAL: Detected lcore 51 as core 26 on socket 1 00:05:07.393 EAL: Detected lcore 52 as core 27 on socket 1 00:05:07.393 EAL: Detected lcore 53 as core 28 on socket 1 00:05:07.393 EAL: Detected lcore 54 as core 29 on socket 1 00:05:07.393 EAL: Detected lcore 55 as core 30 on socket 1 00:05:07.393 EAL: Detected lcore 56 as core 0 on socket 0 00:05:07.393 EAL: Detected lcore 57 as core 1 on socket 0 00:05:07.393 EAL: Detected lcore 58 as core 2 on socket 0 00:05:07.393 EAL: Detected lcore 59 as core 3 on socket 0 00:05:07.393 EAL: Detected lcore 60 as core 4 on socket 0 00:05:07.393 EAL: Detected lcore 61 as core 5 on socket 0 00:05:07.393 EAL: Detected lcore 62 as core 6 on socket 0 00:05:07.393 EAL: Detected lcore 63 as core 8 on socket 0 00:05:07.393 EAL: Detected lcore 64 as core 9 on socket 0 00:05:07.393 EAL: Detected lcore 65 as core 10 on socket 0 00:05:07.393 EAL: Detected lcore 66 as core 11 on socket 0 00:05:07.393 EAL: Detected lcore 67 as core 12 on socket 0 00:05:07.393 EAL: Detected lcore 68 as core 13 on socket 0 00:05:07.393 EAL: Detected lcore 69 as core 14 on socket 0 00:05:07.393 EAL: Detected lcore 70 as core 16 on socket 0 00:05:07.393 EAL: Detected lcore 71 as core 17 on socket 0 00:05:07.393 EAL: Detected lcore 72 as core 18 on socket 0 00:05:07.393 EAL: Detected lcore 73 as core 19 on socket 0 00:05:07.393 EAL: Detected lcore 74 as core 20 on socket 0 00:05:07.393 EAL: Detected lcore 75 as core 21 on socket 0 00:05:07.393 EAL: Detected lcore 76 as core 22 on socket 0 00:05:07.393 EAL: Detected lcore 77 as core 24 on socket 0 00:05:07.393 EAL: Detected lcore 78 as core 25 on socket 0 00:05:07.393 EAL: Detected lcore 79 as core 26 on socket 0 00:05:07.393 EAL: Detected lcore 80 as core 27 on socket 0 00:05:07.393 EAL: Detected lcore 81 as core 28 on socket 0 00:05:07.393 EAL: Detected lcore 82 as core 29 on socket 0 00:05:07.393 EAL: Detected lcore 83 as core 30 on socket 0 00:05:07.393 EAL: Detected lcore 84 as core 0 on socket 1 00:05:07.393 EAL: Detected lcore 85 as core 1 on socket 1 00:05:07.393 EAL: Detected lcore 86 as core 2 on socket 1 00:05:07.393 EAL: Detected lcore 87 as core 3 on socket 1 00:05:07.393 EAL: Detected lcore 88 as core 4 on socket 1 00:05:07.393 EAL: Detected lcore 89 as core 5 on socket 1 00:05:07.393 EAL: Detected lcore 90 as core 6 on socket 1 00:05:07.393 EAL: Detected lcore 91 as core 8 on socket 1 00:05:07.393 EAL: Detected lcore 92 as core 9 on socket 1 00:05:07.393 EAL: Detected lcore 93 as core 10 on socket 1 00:05:07.393 EAL: Detected lcore 94 as core 11 on socket 1 00:05:07.393 EAL: Detected lcore 95 as core 12 on socket 1 00:05:07.393 EAL: Detected lcore 96 as core 13 on socket 1 00:05:07.393 EAL: Detected lcore 97 as core 14 on socket 1 00:05:07.393 EAL: Detected lcore 98 as core 16 on socket 1 00:05:07.393 EAL: Detected lcore 99 as core 17 on socket 1 00:05:07.393 EAL: Detected lcore 100 as core 18 on socket 1 00:05:07.393 EAL: Detected lcore 101 as core 19 on socket 1 00:05:07.393 EAL: Detected lcore 102 as core 20 on socket 1 00:05:07.393 EAL: Detected lcore 103 as core 21 on socket 1 00:05:07.393 EAL: Detected lcore 104 as core 22 on socket 1 00:05:07.393 EAL: Detected lcore 105 as core 24 on socket 1 00:05:07.393 EAL: Detected lcore 106 as core 25 on 
socket 1 00:05:07.393 EAL: Detected lcore 107 as core 26 on socket 1 00:05:07.393 EAL: Detected lcore 108 as core 27 on socket 1 00:05:07.393 EAL: Detected lcore 109 as core 28 on socket 1 00:05:07.393 EAL: Detected lcore 110 as core 29 on socket 1 00:05:07.393 EAL: Detected lcore 111 as core 30 on socket 1 00:05:07.393 EAL: Maximum logical cores by configuration: 128 00:05:07.393 EAL: Detected CPU lcores: 112 00:05:07.393 EAL: Detected NUMA nodes: 2 00:05:07.393 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:07.393 EAL: Detected shared linkage of DPDK 00:05:07.393 EAL: No shared files mode enabled, IPC will be disabled 00:05:07.393 EAL: Bus pci wants IOVA as 'DC' 00:05:07.393 EAL: Buses did not request a specific IOVA mode. 00:05:07.393 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:07.393 EAL: Selected IOVA mode 'VA' 00:05:07.393 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.393 EAL: Probing VFIO support... 00:05:07.393 EAL: IOMMU type 1 (Type 1) is supported 00:05:07.393 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:07.393 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:07.393 EAL: VFIO support initialized 00:05:07.393 EAL: Ask a virtual area of 0x2e000 bytes 00:05:07.393 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:07.393 EAL: Setting up physically contiguous memory... 00:05:07.393 EAL: Setting maximum number of open files to 524288 00:05:07.393 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:07.393 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:07.393 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:07.393 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.393 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:07.393 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:07.393 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.393 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:07.393 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:07.393 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.393 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:07.393 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:07.393 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.393 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:07.393 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:07.393 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.393 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:07.393 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:07.393 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.393 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:07.393 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:07.393 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.393 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:07.393 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:07.393 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.393 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:07.393 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:07.393 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:07.393 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.393 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:07.393 EAL: Memseg list allocated at 
socket 1, page size 0x800kB 00:05:07.393 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.393 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:07.393 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:07.394 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.394 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:07.394 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:07.394 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.394 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:07.394 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:07.394 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.394 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:07.394 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:07.394 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.394 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:07.394 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:07.394 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.394 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:07.394 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:07.394 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.394 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:07.394 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:07.394 EAL: Hugepages will be freed exactly as allocated. 00:05:07.394 EAL: No shared files mode enabled, IPC is disabled 00:05:07.394 EAL: No shared files mode enabled, IPC is disabled 00:05:07.394 EAL: TSC frequency is ~2500000 KHz 00:05:07.394 EAL: Main lcore 0 is ready (tid=7f6e1b6aaa40;cpuset=[0]) 00:05:07.394 EAL: Trying to obtain current memory policy. 00:05:07.394 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.394 EAL: Restoring previous memory policy: 0 00:05:07.394 EAL: request: mp_malloc_sync 00:05:07.394 EAL: No shared files mode enabled, IPC is disabled 00:05:07.394 EAL: Heap on socket 0 was expanded by 2MB 00:05:07.394 EAL: No shared files mode enabled, IPC is disabled 00:05:07.394 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:07.394 EAL: Mem event callback 'spdk:(nil)' registered 00:05:07.394 00:05:07.394 00:05:07.394 CUnit - A unit testing framework for C - Version 2.1-3 00:05:07.394 http://cunit.sourceforge.net/ 00:05:07.394 00:05:07.394 00:05:07.394 Suite: components_suite 00:05:07.394 Test: vtophys_malloc_test ...passed 00:05:07.394 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:07.394 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.394 EAL: Restoring previous memory policy: 4 00:05:07.394 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.394 EAL: request: mp_malloc_sync 00:05:07.394 EAL: No shared files mode enabled, IPC is disabled 00:05:07.394 EAL: Heap on socket 0 was expanded by 4MB 00:05:07.394 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.394 EAL: request: mp_malloc_sync 00:05:07.394 EAL: No shared files mode enabled, IPC is disabled 00:05:07.394 EAL: Heap on socket 0 was shrunk by 4MB 00:05:07.394 EAL: Trying to obtain current memory policy. 
00:05:07.394 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.394 EAL: Restoring previous memory policy: 4 00:05:07.394 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.394 EAL: request: mp_malloc_sync 00:05:07.394 EAL: No shared files mode enabled, IPC is disabled 00:05:07.394 EAL: Heap on socket 0 was expanded by 6MB 00:05:07.652 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.652 EAL: request: mp_malloc_sync 00:05:07.652 EAL: No shared files mode enabled, IPC is disabled 00:05:07.652 EAL: Heap on socket 0 was shrunk by 6MB 00:05:07.652 EAL: Trying to obtain current memory policy. 00:05:07.652 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.652 EAL: Restoring previous memory policy: 4 00:05:07.652 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.652 EAL: request: mp_malloc_sync 00:05:07.652 EAL: No shared files mode enabled, IPC is disabled 00:05:07.652 EAL: Heap on socket 0 was expanded by 10MB 00:05:07.652 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.652 EAL: request: mp_malloc_sync 00:05:07.652 EAL: No shared files mode enabled, IPC is disabled 00:05:07.652 EAL: Heap on socket 0 was shrunk by 10MB 00:05:07.652 EAL: Trying to obtain current memory policy. 00:05:07.652 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.652 EAL: Restoring previous memory policy: 4 00:05:07.652 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.652 EAL: request: mp_malloc_sync 00:05:07.652 EAL: No shared files mode enabled, IPC is disabled 00:05:07.652 EAL: Heap on socket 0 was expanded by 18MB 00:05:07.652 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.652 EAL: request: mp_malloc_sync 00:05:07.652 EAL: No shared files mode enabled, IPC is disabled 00:05:07.652 EAL: Heap on socket 0 was shrunk by 18MB 00:05:07.652 EAL: Trying to obtain current memory policy. 00:05:07.652 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.652 EAL: Restoring previous memory policy: 4 00:05:07.652 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.652 EAL: request: mp_malloc_sync 00:05:07.652 EAL: No shared files mode enabled, IPC is disabled 00:05:07.652 EAL: Heap on socket 0 was expanded by 34MB 00:05:07.652 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.652 EAL: request: mp_malloc_sync 00:05:07.652 EAL: No shared files mode enabled, IPC is disabled 00:05:07.652 EAL: Heap on socket 0 was shrunk by 34MB 00:05:07.652 EAL: Trying to obtain current memory policy. 00:05:07.652 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.652 EAL: Restoring previous memory policy: 4 00:05:07.652 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.652 EAL: request: mp_malloc_sync 00:05:07.652 EAL: No shared files mode enabled, IPC is disabled 00:05:07.652 EAL: Heap on socket 0 was expanded by 66MB 00:05:07.911 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.911 EAL: request: mp_malloc_sync 00:05:07.911 EAL: No shared files mode enabled, IPC is disabled 00:05:07.911 EAL: Heap on socket 0 was shrunk by 66MB 00:05:07.911 EAL: Trying to obtain current memory policy. 
00:05:07.911 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.911 EAL: Restoring previous memory policy: 4 00:05:07.911 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.911 EAL: request: mp_malloc_sync 00:05:07.911 EAL: No shared files mode enabled, IPC is disabled 00:05:07.911 EAL: Heap on socket 0 was expanded by 130MB 00:05:08.169 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.169 EAL: request: mp_malloc_sync 00:05:08.169 EAL: No shared files mode enabled, IPC is disabled 00:05:08.169 EAL: Heap on socket 0 was shrunk by 130MB 00:05:08.427 EAL: Trying to obtain current memory policy. 00:05:08.427 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.427 EAL: Restoring previous memory policy: 4 00:05:08.427 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.427 EAL: request: mp_malloc_sync 00:05:08.427 EAL: No shared files mode enabled, IPC is disabled 00:05:08.427 EAL: Heap on socket 0 was expanded by 258MB 00:05:08.993 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.993 EAL: request: mp_malloc_sync 00:05:08.993 EAL: No shared files mode enabled, IPC is disabled 00:05:08.993 EAL: Heap on socket 0 was shrunk by 258MB 00:05:09.556 EAL: Trying to obtain current memory policy. 00:05:09.556 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.556 EAL: Restoring previous memory policy: 4 00:05:09.556 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.556 EAL: request: mp_malloc_sync 00:05:09.556 EAL: No shared files mode enabled, IPC is disabled 00:05:09.556 EAL: Heap on socket 0 was expanded by 514MB 00:05:10.487 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.488 EAL: request: mp_malloc_sync 00:05:10.488 EAL: No shared files mode enabled, IPC is disabled 00:05:10.488 EAL: Heap on socket 0 was shrunk by 514MB 00:05:11.421 EAL: Trying to obtain current memory policy. 
00:05:11.421 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.679 EAL: Restoring previous memory policy: 4 00:05:11.679 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.679 EAL: request: mp_malloc_sync 00:05:11.679 EAL: No shared files mode enabled, IPC is disabled 00:05:11.679 EAL: Heap on socket 0 was expanded by 1026MB 00:05:13.576 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.834 EAL: request: mp_malloc_sync 00:05:13.834 EAL: No shared files mode enabled, IPC is disabled 00:05:13.834 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:15.207 passed 00:05:15.207 00:05:15.207 Run Summary: Type Total Ran Passed Failed Inactive 00:05:15.207 suites 1 1 n/a 0 0 00:05:15.207 tests 2 2 2 0 0 00:05:15.207 asserts 497 497 497 0 n/a 00:05:15.207 00:05:15.207 Elapsed time = 8.031 seconds 00:05:15.207 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.207 EAL: request: mp_malloc_sync 00:05:15.207 EAL: No shared files mode enabled, IPC is disabled 00:05:15.207 EAL: Heap on socket 0 was shrunk by 2MB 00:05:15.207 EAL: No shared files mode enabled, IPC is disabled 00:05:15.207 EAL: No shared files mode enabled, IPC is disabled 00:05:15.207 EAL: No shared files mode enabled, IPC is disabled 00:05:15.207 00:05:15.207 real 0m8.306s 00:05:15.207 user 0m7.425s 00:05:15.207 sys 0m0.825s 00:05:15.207 06:54:29 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.207 06:54:29 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:15.207 ************************************ 00:05:15.207 END TEST env_vtophys 00:05:15.207 ************************************ 00:05:15.207 06:54:29 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:15.208 06:54:29 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.208 06:54:29 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.208 06:54:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.466 ************************************ 00:05:15.466 START TEST env_pci 00:05:15.466 ************************************ 00:05:15.466 06:54:29 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:15.466 00:05:15.466 00:05:15.466 CUnit - A unit testing framework for C - Version 2.1-3 00:05:15.466 http://cunit.sourceforge.net/ 00:05:15.466 00:05:15.466 00:05:15.466 Suite: pci 00:05:15.466 Test: pci_hook ...[2024-07-24 06:54:29.890227] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1435489 has claimed it 00:05:15.466 EAL: Cannot find device (10000:00:01.0) 00:05:15.466 EAL: Failed to attach device on primary process 00:05:15.466 passed 00:05:15.466 00:05:15.466 Run Summary: Type Total Ran Passed Failed Inactive 00:05:15.466 suites 1 1 n/a 0 0 00:05:15.466 tests 1 1 1 0 0 00:05:15.466 asserts 25 25 25 0 n/a 00:05:15.466 00:05:15.466 Elapsed time = 0.065 seconds 00:05:15.466 00:05:15.466 real 0m0.151s 00:05:15.466 user 0m0.060s 00:05:15.466 sys 0m0.090s 00:05:15.466 06:54:30 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.466 06:54:30 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:15.466 ************************************ 00:05:15.466 END TEST env_pci 00:05:15.466 ************************************ 00:05:15.466 06:54:30 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:15.466 06:54:30 env -- 
env/env.sh@15 -- # uname 00:05:15.466 06:54:30 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:15.466 06:54:30 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:15.466 06:54:30 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:15.466 06:54:30 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:15.466 06:54:30 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.466 06:54:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.466 ************************************ 00:05:15.466 START TEST env_dpdk_post_init 00:05:15.466 ************************************ 00:05:15.466 06:54:30 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:15.724 EAL: Detected CPU lcores: 112 00:05:15.724 EAL: Detected NUMA nodes: 2 00:05:15.724 EAL: Detected shared linkage of DPDK 00:05:15.724 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:15.724 EAL: Selected IOVA mode 'VA' 00:05:15.724 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.724 EAL: VFIO support initialized 00:05:15.724 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:15.724 EAL: Using IOMMU type 1 (Type 1) 00:05:15.983 EAL: Ignore mapping IO port bar(1) 00:05:15.983 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:15.983 EAL: Ignore mapping IO port bar(1) 00:05:15.983 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:15.983 EAL: Ignore mapping IO port bar(1) 00:05:15.983 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:15.983 EAL: Ignore mapping IO port bar(1) 00:05:15.983 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:15.983 EAL: Ignore mapping IO port bar(1) 00:05:15.983 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:15.983 EAL: Ignore mapping IO port bar(1) 00:05:15.984 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:15.984 EAL: Ignore mapping IO port bar(1) 00:05:15.984 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:15.984 EAL: Ignore mapping IO port bar(1) 00:05:15.984 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:15.984 EAL: Ignore mapping IO port bar(1) 00:05:15.984 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:15.984 EAL: Ignore mapping IO port bar(1) 00:05:15.984 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:15.984 EAL: Ignore mapping IO port bar(1) 00:05:15.984 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:15.984 EAL: Ignore mapping IO port bar(1) 00:05:15.984 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:15.984 EAL: Ignore mapping IO port bar(1) 00:05:15.984 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:15.984 EAL: Ignore mapping IO port bar(1) 00:05:15.984 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:15.984 EAL: Ignore mapping IO port bar(1) 00:05:15.984 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:15.984 EAL: Ignore mapping IO port 
bar(1) 00:05:15.984 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:16.922 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:21.115 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:21.115 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:05:21.115 Starting DPDK initialization... 00:05:21.115 Starting SPDK post initialization... 00:05:21.115 SPDK NVMe probe 00:05:21.115 Attaching to 0000:d8:00.0 00:05:21.115 Attached to 0000:d8:00.0 00:05:21.115 Cleaning up... 00:05:21.115 00:05:21.115 real 0m5.373s 00:05:21.115 user 0m3.966s 00:05:21.115 sys 0m0.467s 00:05:21.115 06:54:35 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.115 06:54:35 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:21.115 ************************************ 00:05:21.116 END TEST env_dpdk_post_init 00:05:21.116 ************************************ 00:05:21.116 06:54:35 env -- env/env.sh@26 -- # uname 00:05:21.116 06:54:35 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:21.116 06:54:35 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:21.116 06:54:35 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.116 06:54:35 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.116 06:54:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:21.116 ************************************ 00:05:21.116 START TEST env_mem_callbacks 00:05:21.116 ************************************ 00:05:21.116 06:54:35 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:21.116 EAL: Detected CPU lcores: 112 00:05:21.116 EAL: Detected NUMA nodes: 2 00:05:21.116 EAL: Detected shared linkage of DPDK 00:05:21.116 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:21.116 EAL: Selected IOVA mode 'VA' 00:05:21.116 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.116 EAL: VFIO support initialized 00:05:21.116 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:21.116 00:05:21.116 00:05:21.116 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.116 http://cunit.sourceforge.net/ 00:05:21.116 00:05:21.116 00:05:21.116 Suite: memory 00:05:21.116 Test: test ... 
00:05:21.116 register 0x200000200000 2097152 00:05:21.116 malloc 3145728 00:05:21.116 register 0x200000400000 4194304 00:05:21.116 buf 0x2000004fffc0 len 3145728 PASSED 00:05:21.116 malloc 64 00:05:21.116 buf 0x2000004ffec0 len 64 PASSED 00:05:21.116 malloc 4194304 00:05:21.116 register 0x200000800000 6291456 00:05:21.116 buf 0x2000009fffc0 len 4194304 PASSED 00:05:21.116 free 0x2000004fffc0 3145728 00:05:21.116 free 0x2000004ffec0 64 00:05:21.116 unregister 0x200000400000 4194304 PASSED 00:05:21.116 free 0x2000009fffc0 4194304 00:05:21.116 unregister 0x200000800000 6291456 PASSED 00:05:21.116 malloc 8388608 00:05:21.116 register 0x200000400000 10485760 00:05:21.116 buf 0x2000005fffc0 len 8388608 PASSED 00:05:21.116 free 0x2000005fffc0 8388608 00:05:21.116 unregister 0x200000400000 10485760 PASSED 00:05:21.116 passed 00:05:21.116 00:05:21.116 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.116 suites 1 1 n/a 0 0 00:05:21.116 tests 1 1 1 0 0 00:05:21.116 asserts 15 15 15 0 n/a 00:05:21.116 00:05:21.116 Elapsed time = 0.062 seconds 00:05:21.116 00:05:21.116 real 0m0.194s 00:05:21.116 user 0m0.093s 00:05:21.116 sys 0m0.100s 00:05:21.116 06:54:35 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.116 06:54:35 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:21.116 ************************************ 00:05:21.116 END TEST env_mem_callbacks 00:05:21.116 ************************************ 00:05:21.375 00:05:21.375 real 0m14.729s 00:05:21.375 user 0m11.921s 00:05:21.375 sys 0m1.834s 00:05:21.375 06:54:35 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.375 06:54:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:21.375 ************************************ 00:05:21.375 END TEST env 00:05:21.375 ************************************ 00:05:21.375 06:54:35 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:21.375 06:54:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.375 06:54:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.375 06:54:35 -- common/autotest_common.sh@10 -- # set +x 00:05:21.375 ************************************ 00:05:21.375 START TEST rpc 00:05:21.375 ************************************ 00:05:21.375 06:54:35 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:21.375 * Looking for test storage... 00:05:21.375 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:21.375 06:54:35 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1436682 00:05:21.375 06:54:35 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:21.375 06:54:35 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.375 06:54:35 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1436682 00:05:21.375 06:54:35 rpc -- common/autotest_common.sh@829 -- # '[' -z 1436682 ']' 00:05:21.375 06:54:35 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.375 06:54:35 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.375 06:54:35 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
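For reference, the env suite that finished above (real 0m14.729s) wraps standalone unit binaries; under the same tree they can also be invoked individually, using the paths that appear in the trace, e.g.:

    $SPDK_DIR/test/env/memory/memory_ut
    $SPDK_DIR/test/env/vtophys/vtophys
    $SPDK_DIR/test/env/pci/pci_ut
    $SPDK_DIR/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
    $SPDK_DIR/test/env/mem_callbacks/mem_callbacks

Note these assume the hugepage and device-binding state that scripts/setup.sh established earlier in the log.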
00:05:21.375 06:54:35 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.375 06:54:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.634 [2024-07-24 06:54:36.057422] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:05:21.634 [2024-07-24 06:54:36.057517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1436682 ] 00:05:21.634 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.634 [2024-07-24 06:54:36.201919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.894 [2024-07-24 06:54:36.413226] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:21.894 [2024-07-24 06:54:36.413272] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1436682' to capture a snapshot of events at runtime. 00:05:21.894 [2024-07-24 06:54:36.413285] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:21.894 [2024-07-24 06:54:36.413318] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:21.894 [2024-07-24 06:54:36.413328] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1436682 for offline analysis/debug. 00:05:21.894 [2024-07-24 06:54:36.413373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.832 06:54:37 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.832 06:54:37 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:22.832 06:54:37 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:22.832 06:54:37 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:22.832 06:54:37 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:22.832 06:54:37 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:22.832 06:54:37 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.832 06:54:37 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.832 06:54:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.832 ************************************ 00:05:22.832 START TEST rpc_integrity 00:05:22.832 ************************************ 00:05:22.832 06:54:37 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:22.832 06:54:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:22.832 06:54:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.832 06:54:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.832 06:54:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.832 06:54:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:22.832 06:54:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:22.832 06:54:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # 
'[' 0 == 0 ']' 00:05:22.832 06:54:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:22.832 06:54:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.832 06:54:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.832 06:54:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.832 06:54:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:22.832 06:54:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:22.832 06:54:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.832 06:54:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.832 06:54:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.832 06:54:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:22.832 { 00:05:22.832 "name": "Malloc0", 00:05:22.832 "aliases": [ 00:05:22.832 "de918b05-c8d0-4f53-bac0-7dac870947c6" 00:05:22.832 ], 00:05:22.832 "product_name": "Malloc disk", 00:05:22.832 "block_size": 512, 00:05:22.832 "num_blocks": 16384, 00:05:22.832 "uuid": "de918b05-c8d0-4f53-bac0-7dac870947c6", 00:05:22.832 "assigned_rate_limits": { 00:05:22.832 "rw_ios_per_sec": 0, 00:05:22.832 "rw_mbytes_per_sec": 0, 00:05:22.832 "r_mbytes_per_sec": 0, 00:05:22.832 "w_mbytes_per_sec": 0 00:05:22.832 }, 00:05:22.832 "claimed": false, 00:05:22.832 "zoned": false, 00:05:22.832 "supported_io_types": { 00:05:22.832 "read": true, 00:05:22.832 "write": true, 00:05:22.832 "unmap": true, 00:05:22.832 "flush": true, 00:05:22.832 "reset": true, 00:05:22.832 "nvme_admin": false, 00:05:22.832 "nvme_io": false, 00:05:22.832 "nvme_io_md": false, 00:05:22.832 "write_zeroes": true, 00:05:22.832 "zcopy": true, 00:05:22.832 "get_zone_info": false, 00:05:22.832 "zone_management": false, 00:05:22.832 "zone_append": false, 00:05:22.832 "compare": false, 00:05:22.832 "compare_and_write": false, 00:05:22.832 "abort": true, 00:05:22.832 "seek_hole": false, 00:05:22.832 "seek_data": false, 00:05:22.832 "copy": true, 00:05:22.832 "nvme_iov_md": false 00:05:22.832 }, 00:05:22.832 "memory_domains": [ 00:05:22.832 { 00:05:22.832 "dma_device_id": "system", 00:05:22.832 "dma_device_type": 1 00:05:22.832 }, 00:05:22.832 { 00:05:22.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.832 "dma_device_type": 2 00:05:22.832 } 00:05:22.832 ], 00:05:22.832 "driver_specific": {} 00:05:22.832 } 00:05:22.832 ]' 00:05:22.832 06:54:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:22.832 06:54:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:22.832 06:54:37 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:22.832 06:54:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.832 06:54:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.832 [2024-07-24 06:54:37.457834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:22.832 [2024-07-24 06:54:37.457892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:22.832 [2024-07-24 06:54:37.457921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000021680 00:05:22.832 [2024-07-24 06:54:37.457935] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:22.832 [2024-07-24 06:54:37.460009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:22.832 [2024-07-24 
06:54:37.460043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:22.832 Passthru0 00:05:22.832 06:54:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.092 06:54:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:23.092 06:54:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.092 06:54:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.092 06:54:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.092 06:54:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:23.092 { 00:05:23.092 "name": "Malloc0", 00:05:23.092 "aliases": [ 00:05:23.092 "de918b05-c8d0-4f53-bac0-7dac870947c6" 00:05:23.092 ], 00:05:23.092 "product_name": "Malloc disk", 00:05:23.092 "block_size": 512, 00:05:23.092 "num_blocks": 16384, 00:05:23.092 "uuid": "de918b05-c8d0-4f53-bac0-7dac870947c6", 00:05:23.092 "assigned_rate_limits": { 00:05:23.092 "rw_ios_per_sec": 0, 00:05:23.092 "rw_mbytes_per_sec": 0, 00:05:23.092 "r_mbytes_per_sec": 0, 00:05:23.092 "w_mbytes_per_sec": 0 00:05:23.092 }, 00:05:23.092 "claimed": true, 00:05:23.092 "claim_type": "exclusive_write", 00:05:23.092 "zoned": false, 00:05:23.092 "supported_io_types": { 00:05:23.092 "read": true, 00:05:23.092 "write": true, 00:05:23.092 "unmap": true, 00:05:23.092 "flush": true, 00:05:23.092 "reset": true, 00:05:23.092 "nvme_admin": false, 00:05:23.092 "nvme_io": false, 00:05:23.092 "nvme_io_md": false, 00:05:23.092 "write_zeroes": true, 00:05:23.092 "zcopy": true, 00:05:23.092 "get_zone_info": false, 00:05:23.092 "zone_management": false, 00:05:23.092 "zone_append": false, 00:05:23.092 "compare": false, 00:05:23.092 "compare_and_write": false, 00:05:23.092 "abort": true, 00:05:23.092 "seek_hole": false, 00:05:23.092 "seek_data": false, 00:05:23.092 "copy": true, 00:05:23.092 "nvme_iov_md": false 00:05:23.092 }, 00:05:23.092 "memory_domains": [ 00:05:23.092 { 00:05:23.092 "dma_device_id": "system", 00:05:23.092 "dma_device_type": 1 00:05:23.092 }, 00:05:23.092 { 00:05:23.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:23.092 "dma_device_type": 2 00:05:23.092 } 00:05:23.092 ], 00:05:23.092 "driver_specific": {} 00:05:23.092 }, 00:05:23.092 { 00:05:23.092 "name": "Passthru0", 00:05:23.092 "aliases": [ 00:05:23.092 "f8943247-2558-5e92-a878-4e312e6df1aa" 00:05:23.092 ], 00:05:23.092 "product_name": "passthru", 00:05:23.092 "block_size": 512, 00:05:23.092 "num_blocks": 16384, 00:05:23.092 "uuid": "f8943247-2558-5e92-a878-4e312e6df1aa", 00:05:23.092 "assigned_rate_limits": { 00:05:23.092 "rw_ios_per_sec": 0, 00:05:23.092 "rw_mbytes_per_sec": 0, 00:05:23.092 "r_mbytes_per_sec": 0, 00:05:23.092 "w_mbytes_per_sec": 0 00:05:23.092 }, 00:05:23.092 "claimed": false, 00:05:23.092 "zoned": false, 00:05:23.092 "supported_io_types": { 00:05:23.092 "read": true, 00:05:23.092 "write": true, 00:05:23.092 "unmap": true, 00:05:23.092 "flush": true, 00:05:23.092 "reset": true, 00:05:23.092 "nvme_admin": false, 00:05:23.092 "nvme_io": false, 00:05:23.092 "nvme_io_md": false, 00:05:23.092 "write_zeroes": true, 00:05:23.092 "zcopy": true, 00:05:23.092 "get_zone_info": false, 00:05:23.092 "zone_management": false, 00:05:23.092 "zone_append": false, 00:05:23.092 "compare": false, 00:05:23.092 "compare_and_write": false, 00:05:23.092 "abort": true, 00:05:23.092 "seek_hole": false, 00:05:23.092 "seek_data": false, 00:05:23.092 "copy": true, 00:05:23.092 "nvme_iov_md": false 00:05:23.092 }, 00:05:23.092 
"memory_domains": [ 00:05:23.092 { 00:05:23.092 "dma_device_id": "system", 00:05:23.092 "dma_device_type": 1 00:05:23.092 }, 00:05:23.092 { 00:05:23.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:23.092 "dma_device_type": 2 00:05:23.092 } 00:05:23.092 ], 00:05:23.092 "driver_specific": { 00:05:23.092 "passthru": { 00:05:23.092 "name": "Passthru0", 00:05:23.092 "base_bdev_name": "Malloc0" 00:05:23.092 } 00:05:23.092 } 00:05:23.092 } 00:05:23.092 ]' 00:05:23.092 06:54:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:23.092 06:54:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:23.092 06:54:37 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:23.092 06:54:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.092 06:54:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.092 06:54:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.092 06:54:37 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:23.092 06:54:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.092 06:54:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.092 06:54:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.092 06:54:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:23.092 06:54:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.092 06:54:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.092 06:54:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.092 06:54:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:23.092 06:54:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:23.092 06:54:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:23.092 00:05:23.092 real 0m0.316s 00:05:23.092 user 0m0.173s 00:05:23.092 sys 0m0.046s 00:05:23.092 06:54:37 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.092 06:54:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.092 ************************************ 00:05:23.092 END TEST rpc_integrity 00:05:23.092 ************************************ 00:05:23.092 06:54:37 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:23.092 06:54:37 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.092 06:54:37 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.092 06:54:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.092 ************************************ 00:05:23.092 START TEST rpc_plugins 00:05:23.092 ************************************ 00:05:23.092 06:54:37 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:23.092 06:54:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:23.092 06:54:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.092 06:54:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:23.352 06:54:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.352 06:54:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:23.352 06:54:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:23.352 06:54:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.352 06:54:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 
00:05:23.352 06:54:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.352 06:54:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:23.352 { 00:05:23.352 "name": "Malloc1", 00:05:23.352 "aliases": [ 00:05:23.352 "876078ba-4d7d-4c72-9cd9-51f43dd97405" 00:05:23.352 ], 00:05:23.352 "product_name": "Malloc disk", 00:05:23.352 "block_size": 4096, 00:05:23.352 "num_blocks": 256, 00:05:23.352 "uuid": "876078ba-4d7d-4c72-9cd9-51f43dd97405", 00:05:23.352 "assigned_rate_limits": { 00:05:23.352 "rw_ios_per_sec": 0, 00:05:23.352 "rw_mbytes_per_sec": 0, 00:05:23.352 "r_mbytes_per_sec": 0, 00:05:23.352 "w_mbytes_per_sec": 0 00:05:23.352 }, 00:05:23.352 "claimed": false, 00:05:23.352 "zoned": false, 00:05:23.352 "supported_io_types": { 00:05:23.352 "read": true, 00:05:23.352 "write": true, 00:05:23.352 "unmap": true, 00:05:23.352 "flush": true, 00:05:23.352 "reset": true, 00:05:23.352 "nvme_admin": false, 00:05:23.352 "nvme_io": false, 00:05:23.352 "nvme_io_md": false, 00:05:23.352 "write_zeroes": true, 00:05:23.352 "zcopy": true, 00:05:23.352 "get_zone_info": false, 00:05:23.352 "zone_management": false, 00:05:23.352 "zone_append": false, 00:05:23.352 "compare": false, 00:05:23.352 "compare_and_write": false, 00:05:23.352 "abort": true, 00:05:23.352 "seek_hole": false, 00:05:23.352 "seek_data": false, 00:05:23.352 "copy": true, 00:05:23.352 "nvme_iov_md": false 00:05:23.352 }, 00:05:23.352 "memory_domains": [ 00:05:23.352 { 00:05:23.352 "dma_device_id": "system", 00:05:23.352 "dma_device_type": 1 00:05:23.352 }, 00:05:23.352 { 00:05:23.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:23.352 "dma_device_type": 2 00:05:23.352 } 00:05:23.352 ], 00:05:23.352 "driver_specific": {} 00:05:23.352 } 00:05:23.352 ]' 00:05:23.352 06:54:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:23.352 06:54:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:23.352 06:54:37 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:23.352 06:54:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.352 06:54:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:23.352 06:54:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.352 06:54:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:23.352 06:54:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.352 06:54:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:23.352 06:54:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.352 06:54:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:23.352 06:54:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:23.352 06:54:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:23.352 00:05:23.352 real 0m0.150s 00:05:23.352 user 0m0.087s 00:05:23.352 sys 0m0.027s 00:05:23.352 06:54:37 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.352 06:54:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:23.352 ************************************ 00:05:23.352 END TEST rpc_plugins 00:05:23.352 ************************************ 00:05:23.352 06:54:37 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:23.352 06:54:37 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.352 06:54:37 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.352 06:54:37 rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:23.352 ************************************ 00:05:23.352 START TEST rpc_trace_cmd_test 00:05:23.352 ************************************ 00:05:23.352 06:54:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:23.352 06:54:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:23.352 06:54:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:23.352 06:54:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.352 06:54:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:23.352 06:54:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.352 06:54:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:23.352 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1436682", 00:05:23.352 "tpoint_group_mask": "0x8", 00:05:23.352 "iscsi_conn": { 00:05:23.352 "mask": "0x2", 00:05:23.352 "tpoint_mask": "0x0" 00:05:23.352 }, 00:05:23.352 "scsi": { 00:05:23.352 "mask": "0x4", 00:05:23.352 "tpoint_mask": "0x0" 00:05:23.352 }, 00:05:23.352 "bdev": { 00:05:23.352 "mask": "0x8", 00:05:23.352 "tpoint_mask": "0xffffffffffffffff" 00:05:23.352 }, 00:05:23.352 "nvmf_rdma": { 00:05:23.352 "mask": "0x10", 00:05:23.352 "tpoint_mask": "0x0" 00:05:23.353 }, 00:05:23.353 "nvmf_tcp": { 00:05:23.353 "mask": "0x20", 00:05:23.353 "tpoint_mask": "0x0" 00:05:23.353 }, 00:05:23.353 "ftl": { 00:05:23.353 "mask": "0x40", 00:05:23.353 "tpoint_mask": "0x0" 00:05:23.353 }, 00:05:23.353 "blobfs": { 00:05:23.353 "mask": "0x80", 00:05:23.353 "tpoint_mask": "0x0" 00:05:23.353 }, 00:05:23.353 "dsa": { 00:05:23.353 "mask": "0x200", 00:05:23.353 "tpoint_mask": "0x0" 00:05:23.353 }, 00:05:23.353 "thread": { 00:05:23.353 "mask": "0x400", 00:05:23.353 "tpoint_mask": "0x0" 00:05:23.353 }, 00:05:23.353 "nvme_pcie": { 00:05:23.353 "mask": "0x800", 00:05:23.353 "tpoint_mask": "0x0" 00:05:23.353 }, 00:05:23.353 "iaa": { 00:05:23.353 "mask": "0x1000", 00:05:23.353 "tpoint_mask": "0x0" 00:05:23.353 }, 00:05:23.353 "nvme_tcp": { 00:05:23.353 "mask": "0x2000", 00:05:23.353 "tpoint_mask": "0x0" 00:05:23.353 }, 00:05:23.353 "bdev_nvme": { 00:05:23.353 "mask": "0x4000", 00:05:23.353 "tpoint_mask": "0x0" 00:05:23.353 }, 00:05:23.353 "sock": { 00:05:23.353 "mask": "0x8000", 00:05:23.353 "tpoint_mask": "0x0" 00:05:23.353 } 00:05:23.353 }' 00:05:23.353 06:54:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:23.611 06:54:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:23.611 06:54:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:23.611 06:54:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:23.611 06:54:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:23.611 06:54:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:23.611 06:54:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:23.611 06:54:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:23.611 06:54:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:23.612 06:54:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:23.612 00:05:23.612 real 0m0.224s 00:05:23.612 user 0m0.192s 00:05:23.612 sys 0m0.023s 00:05:23.612 06:54:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.612 06:54:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # 
set +x 00:05:23.612 ************************************ 00:05:23.612 END TEST rpc_trace_cmd_test 00:05:23.612 ************************************ 00:05:23.612 06:54:38 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:23.612 06:54:38 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:23.612 06:54:38 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:23.612 06:54:38 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.612 06:54:38 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.612 06:54:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.871 ************************************ 00:05:23.871 START TEST rpc_daemon_integrity 00:05:23.871 ************************************ 00:05:23.871 06:54:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:23.871 06:54:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:23.871 06:54:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.871 06:54:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.871 06:54:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.871 06:54:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:23.871 06:54:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:23.871 06:54:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:23.871 06:54:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:23.871 06:54:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.871 06:54:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.871 06:54:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.871 06:54:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:23.871 06:54:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:23.871 06:54:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.871 06:54:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.871 06:54:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.871 06:54:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:23.871 { 00:05:23.871 "name": "Malloc2", 00:05:23.871 "aliases": [ 00:05:23.871 "c0a65405-5c21-4cad-b165-49f5a0d1fec2" 00:05:23.871 ], 00:05:23.871 "product_name": "Malloc disk", 00:05:23.871 "block_size": 512, 00:05:23.871 "num_blocks": 16384, 00:05:23.871 "uuid": "c0a65405-5c21-4cad-b165-49f5a0d1fec2", 00:05:23.871 "assigned_rate_limits": { 00:05:23.871 "rw_ios_per_sec": 0, 00:05:23.871 "rw_mbytes_per_sec": 0, 00:05:23.871 "r_mbytes_per_sec": 0, 00:05:23.871 "w_mbytes_per_sec": 0 00:05:23.871 }, 00:05:23.871 "claimed": false, 00:05:23.871 "zoned": false, 00:05:23.872 "supported_io_types": { 00:05:23.872 "read": true, 00:05:23.872 "write": true, 00:05:23.872 "unmap": true, 00:05:23.872 "flush": true, 00:05:23.872 "reset": true, 00:05:23.872 "nvme_admin": false, 00:05:23.872 "nvme_io": false, 00:05:23.872 "nvme_io_md": false, 00:05:23.872 "write_zeroes": true, 00:05:23.872 "zcopy": true, 00:05:23.872 "get_zone_info": false, 00:05:23.872 "zone_management": false, 00:05:23.872 "zone_append": false, 00:05:23.872 "compare": false, 00:05:23.872 "compare_and_write": false, 00:05:23.872 "abort": true, 00:05:23.872 "seek_hole": false, 
00:05:23.872 "seek_data": false, 00:05:23.872 "copy": true, 00:05:23.872 "nvme_iov_md": false 00:05:23.872 }, 00:05:23.872 "memory_domains": [ 00:05:23.872 { 00:05:23.872 "dma_device_id": "system", 00:05:23.872 "dma_device_type": 1 00:05:23.872 }, 00:05:23.872 { 00:05:23.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:23.872 "dma_device_type": 2 00:05:23.872 } 00:05:23.872 ], 00:05:23.872 "driver_specific": {} 00:05:23.872 } 00:05:23.872 ]' 00:05:23.872 06:54:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:23.872 06:54:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:23.872 06:54:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:23.872 06:54:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.872 06:54:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.872 [2024-07-24 06:54:38.388939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:23.872 [2024-07-24 06:54:38.388987] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:23.872 [2024-07-24 06:54:38.389008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022880 00:05:23.872 [2024-07-24 06:54:38.389022] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:23.872 [2024-07-24 06:54:38.391087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:23.872 [2024-07-24 06:54:38.391121] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:23.872 Passthru0 00:05:23.872 06:54:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.872 06:54:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:23.872 06:54:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.872 06:54:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.872 06:54:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.872 06:54:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:23.872 { 00:05:23.872 "name": "Malloc2", 00:05:23.872 "aliases": [ 00:05:23.872 "c0a65405-5c21-4cad-b165-49f5a0d1fec2" 00:05:23.872 ], 00:05:23.872 "product_name": "Malloc disk", 00:05:23.872 "block_size": 512, 00:05:23.872 "num_blocks": 16384, 00:05:23.872 "uuid": "c0a65405-5c21-4cad-b165-49f5a0d1fec2", 00:05:23.872 "assigned_rate_limits": { 00:05:23.872 "rw_ios_per_sec": 0, 00:05:23.872 "rw_mbytes_per_sec": 0, 00:05:23.872 "r_mbytes_per_sec": 0, 00:05:23.872 "w_mbytes_per_sec": 0 00:05:23.872 }, 00:05:23.872 "claimed": true, 00:05:23.872 "claim_type": "exclusive_write", 00:05:23.872 "zoned": false, 00:05:23.872 "supported_io_types": { 00:05:23.872 "read": true, 00:05:23.872 "write": true, 00:05:23.872 "unmap": true, 00:05:23.872 "flush": true, 00:05:23.872 "reset": true, 00:05:23.872 "nvme_admin": false, 00:05:23.872 "nvme_io": false, 00:05:23.872 "nvme_io_md": false, 00:05:23.872 "write_zeroes": true, 00:05:23.872 "zcopy": true, 00:05:23.872 "get_zone_info": false, 00:05:23.872 "zone_management": false, 00:05:23.872 "zone_append": false, 00:05:23.872 "compare": false, 00:05:23.872 "compare_and_write": false, 00:05:23.872 "abort": true, 00:05:23.872 "seek_hole": false, 00:05:23.872 "seek_data": false, 00:05:23.872 "copy": true, 00:05:23.872 "nvme_iov_md": false 00:05:23.872 }, 00:05:23.872 
"memory_domains": [ 00:05:23.872 { 00:05:23.872 "dma_device_id": "system", 00:05:23.872 "dma_device_type": 1 00:05:23.872 }, 00:05:23.872 { 00:05:23.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:23.872 "dma_device_type": 2 00:05:23.872 } 00:05:23.872 ], 00:05:23.872 "driver_specific": {} 00:05:23.872 }, 00:05:23.872 { 00:05:23.872 "name": "Passthru0", 00:05:23.872 "aliases": [ 00:05:23.872 "ffc42000-9d0d-569a-a192-b329bb4b67a9" 00:05:23.872 ], 00:05:23.872 "product_name": "passthru", 00:05:23.872 "block_size": 512, 00:05:23.872 "num_blocks": 16384, 00:05:23.872 "uuid": "ffc42000-9d0d-569a-a192-b329bb4b67a9", 00:05:23.872 "assigned_rate_limits": { 00:05:23.872 "rw_ios_per_sec": 0, 00:05:23.872 "rw_mbytes_per_sec": 0, 00:05:23.872 "r_mbytes_per_sec": 0, 00:05:23.872 "w_mbytes_per_sec": 0 00:05:23.872 }, 00:05:23.872 "claimed": false, 00:05:23.872 "zoned": false, 00:05:23.872 "supported_io_types": { 00:05:23.872 "read": true, 00:05:23.872 "write": true, 00:05:23.872 "unmap": true, 00:05:23.872 "flush": true, 00:05:23.872 "reset": true, 00:05:23.872 "nvme_admin": false, 00:05:23.872 "nvme_io": false, 00:05:23.872 "nvme_io_md": false, 00:05:23.872 "write_zeroes": true, 00:05:23.872 "zcopy": true, 00:05:23.872 "get_zone_info": false, 00:05:23.872 "zone_management": false, 00:05:23.872 "zone_append": false, 00:05:23.872 "compare": false, 00:05:23.872 "compare_and_write": false, 00:05:23.872 "abort": true, 00:05:23.872 "seek_hole": false, 00:05:23.872 "seek_data": false, 00:05:23.872 "copy": true, 00:05:23.872 "nvme_iov_md": false 00:05:23.872 }, 00:05:23.872 "memory_domains": [ 00:05:23.872 { 00:05:23.872 "dma_device_id": "system", 00:05:23.872 "dma_device_type": 1 00:05:23.872 }, 00:05:23.872 { 00:05:23.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:23.872 "dma_device_type": 2 00:05:23.872 } 00:05:23.872 ], 00:05:23.872 "driver_specific": { 00:05:23.872 "passthru": { 00:05:23.872 "name": "Passthru0", 00:05:23.872 "base_bdev_name": "Malloc2" 00:05:23.872 } 00:05:23.872 } 00:05:23.872 } 00:05:23.872 ]' 00:05:23.872 06:54:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:23.872 06:54:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:23.872 06:54:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:23.872 06:54:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.872 06:54:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.872 06:54:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.872 06:54:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:23.872 06:54:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.872 06:54:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.132 06:54:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.132 06:54:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:24.132 06:54:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.132 06:54:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.132 06:54:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.132 06:54:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:24.132 06:54:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:24.132 
06:54:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:24.132 00:05:24.132 real 0m0.305s 00:05:24.132 user 0m0.174s 00:05:24.132 sys 0m0.047s 00:05:24.132 06:54:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.132 06:54:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.132 ************************************ 00:05:24.132 END TEST rpc_daemon_integrity 00:05:24.132 ************************************ 00:05:24.132 06:54:38 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:24.132 06:54:38 rpc -- rpc/rpc.sh@84 -- # killprocess 1436682 00:05:24.132 06:54:38 rpc -- common/autotest_common.sh@948 -- # '[' -z 1436682 ']' 00:05:24.132 06:54:38 rpc -- common/autotest_common.sh@952 -- # kill -0 1436682 00:05:24.132 06:54:38 rpc -- common/autotest_common.sh@953 -- # uname 00:05:24.132 06:54:38 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:24.132 06:54:38 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1436682 00:05:24.132 06:54:38 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:24.132 06:54:38 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:24.132 06:54:38 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1436682' 00:05:24.132 killing process with pid 1436682 00:05:24.132 06:54:38 rpc -- common/autotest_common.sh@967 -- # kill 1436682 00:05:24.132 06:54:38 rpc -- common/autotest_common.sh@972 -- # wait 1436682 00:05:26.668 00:05:26.668 real 0m5.135s 00:05:26.668 user 0m5.632s 00:05:26.668 sys 0m1.008s 00:05:26.668 06:54:40 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.668 06:54:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.668 ************************************ 00:05:26.668 END TEST rpc 00:05:26.668 ************************************ 00:05:26.668 06:54:41 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:26.668 06:54:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.668 06:54:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.668 06:54:41 -- common/autotest_common.sh@10 -- # set +x 00:05:26.668 ************************************ 00:05:26.668 START TEST skip_rpc 00:05:26.668 ************************************ 00:05:26.668 06:54:41 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:26.668 * Looking for test storage... 
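The rpc_trace_cmd_test pass earlier in this block only works because the target was started with the bdev tracepoint group enabled (the startup notice at the top of this run reads 'Tracepoint Group Mask bdev specified'), which is why trace_get_info reports tpoint_group_mask 0x8 and a bdev tpoint_mask of 0xffffffffffffffff. A hedged sketch of inspecting the same state by hand, assuming the target was launched with the bdev tpoint group (e.g. via -e bdev) and <pid> stands for its process id:

  scripts/rpc.py trace_get_info | jq -r .tpoint_group_mask    # expect 0x8 (the bdev group)
  scripts/rpc.py trace_get_info | jq -r .tpoint_shm_path      # /dev/shm/spdk_tgt_trace.pid<pid>
  spdk_trace -s spdk_tgt -p <pid>                              # decode the trace ring from that shm file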
00:05:26.668 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:26.668 06:54:41 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:26.668 06:54:41 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:26.668 06:54:41 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:26.668 06:54:41 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.668 06:54:41 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.668 06:54:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.668 ************************************ 00:05:26.668 START TEST skip_rpc 00:05:26.668 ************************************ 00:05:26.668 06:54:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:26.668 06:54:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:26.668 06:54:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1437830 00:05:26.668 06:54:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.668 06:54:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:26.668 [2024-07-24 06:54:41.270893] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:05:26.669 [2024-07-24 06:54:41.270987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1437830 ] 00:05:26.928 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.928 [2024-07-24 06:54:41.413742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.187 [2024-07-24 06:54:41.616746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.506 06:54:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:32.506 06:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:32.506 06:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:32.506 06:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:32.506 06:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:32.506 06:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:32.506 06:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:32.506 06:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:32.506 06:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.506 06:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.506 06:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:32.506 06:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:32.506 06:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:32.506 06:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:32.506 06:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:32.506 06:54:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 
-- # trap - SIGINT SIGTERM EXIT 00:05:32.506 06:54:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1437830 00:05:32.506 06:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1437830 ']' 00:05:32.506 06:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1437830 00:05:32.506 06:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:32.506 06:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:32.506 06:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1437830 00:05:32.506 06:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:32.506 06:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:32.506 06:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1437830' 00:05:32.506 killing process with pid 1437830 00:05:32.506 06:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1437830 00:05:32.506 06:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1437830 00:05:34.406 00:05:34.406 real 0m7.350s 00:05:34.406 user 0m6.941s 00:05:34.406 sys 0m0.421s 00:05:34.406 06:54:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.406 06:54:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.406 ************************************ 00:05:34.406 END TEST skip_rpc 00:05:34.406 ************************************ 00:05:34.406 06:54:48 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:34.406 06:54:48 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.406 06:54:48 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.406 06:54:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.406 ************************************ 00:05:34.406 START TEST skip_rpc_with_json 00:05:34.406 ************************************ 00:05:34.406 06:54:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:34.406 06:54:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:34.406 06:54:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:34.406 06:54:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1439068 00:05:34.406 06:54:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.406 06:54:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1439068 00:05:34.406 06:54:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1439068 ']' 00:05:34.406 06:54:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.406 06:54:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.406 06:54:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
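The skip_rpc case that just ended starts the target with --no-rpc-server, so the point of the test is that RPC must fail: the NOT wrapper around rpc_cmd spdk_get_version passes precisely because the call errors out (es=1). The same behaviour by hand, as a sketch against an SPDK build tree:

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5                                     # reactor comes up, but no /var/tmp/spdk.sock is created
  scripts/rpc.py spdk_get_version             # fails: nothing listens on the default RPC socket
  kill $!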
00:05:34.406 06:54:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.406 06:54:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:34.406 [2024-07-24 06:54:48.697563] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:05:34.406 [2024-07-24 06:54:48.697664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1439068 ] 00:05:34.406 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.406 [2024-07-24 06:54:48.842014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.664 [2024-07-24 06:54:49.048605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.599 06:54:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.599 06:54:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:35.599 06:54:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:35.599 06:54:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.599 06:54:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:35.599 [2024-07-24 06:54:49.885589] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:35.599 request: 00:05:35.599 { 00:05:35.599 "trtype": "tcp", 00:05:35.599 "method": "nvmf_get_transports", 00:05:35.599 "req_id": 1 00:05:35.599 } 00:05:35.599 Got JSON-RPC error response 00:05:35.599 response: 00:05:35.599 { 00:05:35.599 "code": -19, 00:05:35.599 "message": "No such device" 00:05:35.599 } 00:05:35.599 06:54:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:35.599 06:54:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:35.599 06:54:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.599 06:54:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:35.599 [2024-07-24 06:54:49.897716] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:35.599 06:54:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.599 06:54:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:35.599 06:54:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.599 06:54:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:35.599 06:54:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.599 06:54:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:35.599 { 00:05:35.599 "subsystems": [ 00:05:35.599 { 00:05:35.599 "subsystem": "keyring", 00:05:35.599 "config": [] 00:05:35.599 }, 00:05:35.599 { 00:05:35.599 "subsystem": "iobuf", 00:05:35.599 "config": [ 00:05:35.599 { 00:05:35.599 "method": "iobuf_set_options", 00:05:35.599 "params": { 00:05:35.599 "small_pool_count": 8192, 00:05:35.599 "large_pool_count": 1024, 00:05:35.599 "small_bufsize": 8192, 00:05:35.599 "large_bufsize": 135168 00:05:35.599 } 00:05:35.599 } 00:05:35.599 ] 00:05:35.599 }, 00:05:35.599 { 00:05:35.599 "subsystem": 
"sock", 00:05:35.599 "config": [ 00:05:35.599 { 00:05:35.599 "method": "sock_set_default_impl", 00:05:35.599 "params": { 00:05:35.599 "impl_name": "posix" 00:05:35.599 } 00:05:35.599 }, 00:05:35.599 { 00:05:35.599 "method": "sock_impl_set_options", 00:05:35.599 "params": { 00:05:35.599 "impl_name": "ssl", 00:05:35.599 "recv_buf_size": 4096, 00:05:35.599 "send_buf_size": 4096, 00:05:35.599 "enable_recv_pipe": true, 00:05:35.599 "enable_quickack": false, 00:05:35.599 "enable_placement_id": 0, 00:05:35.599 "enable_zerocopy_send_server": true, 00:05:35.599 "enable_zerocopy_send_client": false, 00:05:35.599 "zerocopy_threshold": 0, 00:05:35.599 "tls_version": 0, 00:05:35.599 "enable_ktls": false 00:05:35.599 } 00:05:35.599 }, 00:05:35.599 { 00:05:35.599 "method": "sock_impl_set_options", 00:05:35.599 "params": { 00:05:35.599 "impl_name": "posix", 00:05:35.599 "recv_buf_size": 2097152, 00:05:35.599 "send_buf_size": 2097152, 00:05:35.599 "enable_recv_pipe": true, 00:05:35.599 "enable_quickack": false, 00:05:35.599 "enable_placement_id": 0, 00:05:35.599 "enable_zerocopy_send_server": true, 00:05:35.599 "enable_zerocopy_send_client": false, 00:05:35.599 "zerocopy_threshold": 0, 00:05:35.599 "tls_version": 0, 00:05:35.599 "enable_ktls": false 00:05:35.599 } 00:05:35.599 } 00:05:35.599 ] 00:05:35.599 }, 00:05:35.599 { 00:05:35.599 "subsystem": "vmd", 00:05:35.599 "config": [] 00:05:35.599 }, 00:05:35.599 { 00:05:35.599 "subsystem": "accel", 00:05:35.599 "config": [ 00:05:35.599 { 00:05:35.599 "method": "accel_set_options", 00:05:35.599 "params": { 00:05:35.599 "small_cache_size": 128, 00:05:35.599 "large_cache_size": 16, 00:05:35.599 "task_count": 2048, 00:05:35.599 "sequence_count": 2048, 00:05:35.599 "buf_count": 2048 00:05:35.599 } 00:05:35.599 } 00:05:35.599 ] 00:05:35.599 }, 00:05:35.599 { 00:05:35.599 "subsystem": "bdev", 00:05:35.599 "config": [ 00:05:35.599 { 00:05:35.600 "method": "bdev_set_options", 00:05:35.600 "params": { 00:05:35.600 "bdev_io_pool_size": 65535, 00:05:35.600 "bdev_io_cache_size": 256, 00:05:35.600 "bdev_auto_examine": true, 00:05:35.600 "iobuf_small_cache_size": 128, 00:05:35.600 "iobuf_large_cache_size": 16 00:05:35.600 } 00:05:35.600 }, 00:05:35.600 { 00:05:35.600 "method": "bdev_raid_set_options", 00:05:35.600 "params": { 00:05:35.600 "process_window_size_kb": 1024, 00:05:35.600 "process_max_bandwidth_mb_sec": 0 00:05:35.600 } 00:05:35.600 }, 00:05:35.600 { 00:05:35.600 "method": "bdev_iscsi_set_options", 00:05:35.600 "params": { 00:05:35.600 "timeout_sec": 30 00:05:35.600 } 00:05:35.600 }, 00:05:35.600 { 00:05:35.600 "method": "bdev_nvme_set_options", 00:05:35.600 "params": { 00:05:35.600 "action_on_timeout": "none", 00:05:35.600 "timeout_us": 0, 00:05:35.600 "timeout_admin_us": 0, 00:05:35.600 "keep_alive_timeout_ms": 10000, 00:05:35.600 "arbitration_burst": 0, 00:05:35.600 "low_priority_weight": 0, 00:05:35.600 "medium_priority_weight": 0, 00:05:35.600 "high_priority_weight": 0, 00:05:35.600 "nvme_adminq_poll_period_us": 10000, 00:05:35.600 "nvme_ioq_poll_period_us": 0, 00:05:35.600 "io_queue_requests": 0, 00:05:35.600 "delay_cmd_submit": true, 00:05:35.600 "transport_retry_count": 4, 00:05:35.600 "bdev_retry_count": 3, 00:05:35.600 "transport_ack_timeout": 0, 00:05:35.600 "ctrlr_loss_timeout_sec": 0, 00:05:35.600 "reconnect_delay_sec": 0, 00:05:35.600 "fast_io_fail_timeout_sec": 0, 00:05:35.600 "disable_auto_failback": false, 00:05:35.600 "generate_uuids": false, 00:05:35.600 "transport_tos": 0, 00:05:35.600 "nvme_error_stat": false, 00:05:35.600 "rdma_srq_size": 
0, 00:05:35.600 "io_path_stat": false, 00:05:35.600 "allow_accel_sequence": false, 00:05:35.600 "rdma_max_cq_size": 0, 00:05:35.600 "rdma_cm_event_timeout_ms": 0, 00:05:35.600 "dhchap_digests": [ 00:05:35.600 "sha256", 00:05:35.600 "sha384", 00:05:35.600 "sha512" 00:05:35.600 ], 00:05:35.600 "dhchap_dhgroups": [ 00:05:35.600 "null", 00:05:35.600 "ffdhe2048", 00:05:35.600 "ffdhe3072", 00:05:35.600 "ffdhe4096", 00:05:35.600 "ffdhe6144", 00:05:35.600 "ffdhe8192" 00:05:35.600 ] 00:05:35.600 } 00:05:35.600 }, 00:05:35.600 { 00:05:35.600 "method": "bdev_nvme_set_hotplug", 00:05:35.600 "params": { 00:05:35.600 "period_us": 100000, 00:05:35.600 "enable": false 00:05:35.600 } 00:05:35.600 }, 00:05:35.600 { 00:05:35.600 "method": "bdev_wait_for_examine" 00:05:35.600 } 00:05:35.600 ] 00:05:35.600 }, 00:05:35.600 { 00:05:35.600 "subsystem": "scsi", 00:05:35.600 "config": null 00:05:35.600 }, 00:05:35.600 { 00:05:35.600 "subsystem": "scheduler", 00:05:35.600 "config": [ 00:05:35.600 { 00:05:35.600 "method": "framework_set_scheduler", 00:05:35.600 "params": { 00:05:35.600 "name": "static" 00:05:35.600 } 00:05:35.600 } 00:05:35.600 ] 00:05:35.600 }, 00:05:35.600 { 00:05:35.600 "subsystem": "vhost_scsi", 00:05:35.600 "config": [] 00:05:35.600 }, 00:05:35.600 { 00:05:35.600 "subsystem": "vhost_blk", 00:05:35.600 "config": [] 00:05:35.600 }, 00:05:35.600 { 00:05:35.600 "subsystem": "ublk", 00:05:35.600 "config": [] 00:05:35.600 }, 00:05:35.600 { 00:05:35.600 "subsystem": "nbd", 00:05:35.600 "config": [] 00:05:35.600 }, 00:05:35.600 { 00:05:35.600 "subsystem": "nvmf", 00:05:35.600 "config": [ 00:05:35.600 { 00:05:35.600 "method": "nvmf_set_config", 00:05:35.600 "params": { 00:05:35.600 "discovery_filter": "match_any", 00:05:35.600 "admin_cmd_passthru": { 00:05:35.600 "identify_ctrlr": false 00:05:35.600 } 00:05:35.600 } 00:05:35.600 }, 00:05:35.600 { 00:05:35.600 "method": "nvmf_set_max_subsystems", 00:05:35.600 "params": { 00:05:35.600 "max_subsystems": 1024 00:05:35.600 } 00:05:35.600 }, 00:05:35.600 { 00:05:35.600 "method": "nvmf_set_crdt", 00:05:35.600 "params": { 00:05:35.600 "crdt1": 0, 00:05:35.600 "crdt2": 0, 00:05:35.600 "crdt3": 0 00:05:35.600 } 00:05:35.600 }, 00:05:35.600 { 00:05:35.600 "method": "nvmf_create_transport", 00:05:35.600 "params": { 00:05:35.600 "trtype": "TCP", 00:05:35.600 "max_queue_depth": 128, 00:05:35.600 "max_io_qpairs_per_ctrlr": 127, 00:05:35.600 "in_capsule_data_size": 4096, 00:05:35.600 "max_io_size": 131072, 00:05:35.600 "io_unit_size": 131072, 00:05:35.600 "max_aq_depth": 128, 00:05:35.600 "num_shared_buffers": 511, 00:05:35.600 "buf_cache_size": 4294967295, 00:05:35.600 "dif_insert_or_strip": false, 00:05:35.600 "zcopy": false, 00:05:35.600 "c2h_success": true, 00:05:35.600 "sock_priority": 0, 00:05:35.600 "abort_timeout_sec": 1, 00:05:35.600 "ack_timeout": 0, 00:05:35.600 "data_wr_pool_size": 0 00:05:35.600 } 00:05:35.600 } 00:05:35.600 ] 00:05:35.600 }, 00:05:35.600 { 00:05:35.600 "subsystem": "iscsi", 00:05:35.600 "config": [ 00:05:35.600 { 00:05:35.600 "method": "iscsi_set_options", 00:05:35.600 "params": { 00:05:35.600 "node_base": "iqn.2016-06.io.spdk", 00:05:35.600 "max_sessions": 128, 00:05:35.600 "max_connections_per_session": 2, 00:05:35.600 "max_queue_depth": 64, 00:05:35.600 "default_time2wait": 2, 00:05:35.600 "default_time2retain": 20, 00:05:35.600 "first_burst_length": 8192, 00:05:35.600 "immediate_data": true, 00:05:35.600 "allow_duplicated_isid": false, 00:05:35.600 "error_recovery_level": 0, 00:05:35.600 "nop_timeout": 60, 00:05:35.600 
"nop_in_interval": 30, 00:05:35.600 "disable_chap": false, 00:05:35.600 "require_chap": false, 00:05:35.600 "mutual_chap": false, 00:05:35.600 "chap_group": 0, 00:05:35.600 "max_large_datain_per_connection": 64, 00:05:35.600 "max_r2t_per_connection": 4, 00:05:35.600 "pdu_pool_size": 36864, 00:05:35.600 "immediate_data_pool_size": 16384, 00:05:35.600 "data_out_pool_size": 2048 00:05:35.600 } 00:05:35.600 } 00:05:35.600 ] 00:05:35.600 } 00:05:35.600 ] 00:05:35.600 } 00:05:35.600 06:54:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:35.600 06:54:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1439068 00:05:35.600 06:54:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1439068 ']' 00:05:35.600 06:54:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1439068 00:05:35.600 06:54:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:35.600 06:54:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:35.600 06:54:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1439068 00:05:35.600 06:54:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:35.600 06:54:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:35.600 06:54:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1439068' 00:05:35.600 killing process with pid 1439068 00:05:35.600 06:54:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1439068 00:05:35.600 06:54:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1439068 00:05:38.131 06:54:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1439820 00:05:38.131 06:54:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:38.131 06:54:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:43.397 06:54:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1439820 00:05:43.397 06:54:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1439820 ']' 00:05:43.397 06:54:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1439820 00:05:43.397 06:54:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:43.397 06:54:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.397 06:54:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1439820 00:05:43.397 06:54:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:43.397 06:54:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:43.397 06:54:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1439820' 00:05:43.397 killing process with pid 1439820 00:05:43.397 06:54:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1439820 00:05:43.397 06:54:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1439820 00:05:45.299 06:54:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP 
Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:45.299 06:54:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:45.299 00:05:45.299 real 0m11.195s 00:05:45.299 user 0m10.642s 00:05:45.299 sys 0m0.991s 00:05:45.299 06:54:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.299 06:54:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:45.299 ************************************ 00:05:45.299 END TEST skip_rpc_with_json 00:05:45.299 ************************************ 00:05:45.299 06:54:59 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:45.299 06:54:59 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.299 06:54:59 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.299 06:54:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.299 ************************************ 00:05:45.299 START TEST skip_rpc_with_delay 00:05:45.299 ************************************ 00:05:45.299 06:54:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:45.300 06:54:59 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:45.300 06:54:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:45.300 06:54:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:45.300 06:54:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.300 06:54:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:45.300 06:54:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.300 06:54:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:45.300 06:54:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.300 06:54:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:45.300 06:54:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.300 06:54:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:45.300 06:54:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:45.559 [2024-07-24 06:54:59.991256] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
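The large JSON blob above is what save_config emitted into test/rpc/config.json after skip_rpc_with_json created an nvmf TCP transport over RPC; the follow-up target (pid 1439820) is then booted with --json pointing at that file, and the grep for 'TCP Transport Init' in log.txt confirms the transport was rebuilt purely from the saved configuration. Immediately afterwards, skip_rpc_with_delay checks the complementary guard: --wait-for-rpc combined with --no-rpc-server is rejected up front with the 'Cannot use --wait-for-rpc' error just above. The save/restore round trip in isolation, as a sketch (file names are placeholders):

  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py save_config > config.json
  # stop the first target, then start a fresh one from the saved state:
  build/bin/spdk_tgt --json config.json 2>&1 | tee log.txt &
  sleep 5
  grep -q 'TCP Transport Init' log.txt && echo 'transport restored from JSON'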
00:05:45.559 [2024-07-24 06:54:59.991364] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:45.559 06:55:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:45.559 06:55:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:45.559 06:55:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:45.559 06:55:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:45.559 00:05:45.559 real 0m0.155s 00:05:45.559 user 0m0.081s 00:05:45.559 sys 0m0.074s 00:05:45.559 06:55:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.559 06:55:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:45.559 ************************************ 00:05:45.559 END TEST skip_rpc_with_delay 00:05:45.559 ************************************ 00:05:45.559 06:55:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:45.559 06:55:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:45.559 06:55:00 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:45.559 06:55:00 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.559 06:55:00 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.559 06:55:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.559 ************************************ 00:05:45.559 START TEST exit_on_failed_rpc_init 00:05:45.559 ************************************ 00:05:45.559 06:55:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:45.559 06:55:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1441204 00:05:45.559 06:55:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1441204 00:05:45.559 06:55:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.559 06:55:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1441204 ']' 00:05:45.559 06:55:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.559 06:55:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.559 06:55:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.559 06:55:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.559 06:55:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:45.818 [2024-07-24 06:55:00.232961] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
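The exit_on_failed_rpc_init case starting here brings up a first target on the default RPC socket and then launches a second one (core mask 0x2) against the same /var/tmp/spdk.sock; the second instance is expected to abort with the 'RPC Unix domain socket path ... in use. Specify another.' error seen below, and its non-zero exit is what the NOT wrapper asserts. A sketch of the collision, plus the usual way to avoid it when two targets are actually wanted (the second socket path is only an example, and this assumes enough hugepages for two instances):

  build/bin/spdk_tgt -m 0x1 &                            # owns /var/tmp/spdk.sock
  sleep 5
  build/bin/spdk_tgt -m 0x2                              # fails: RPC socket already in use
  build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &     # works: distinct RPC listen path
  scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version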
00:05:45.818 [2024-07-24 06:55:00.233059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1441204 ] 00:05:45.818 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.818 [2024-07-24 06:55:00.378845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.076 [2024-07-24 06:55:00.580709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.014 06:55:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.014 06:55:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:47.014 06:55:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.014 06:55:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:47.014 06:55:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:47.014 06:55:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:47.014 06:55:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.014 06:55:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.014 06:55:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.014 06:55:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.014 06:55:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.014 06:55:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.014 06:55:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.014 06:55:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:47.014 06:55:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:47.014 [2024-07-24 06:55:01.522382] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:05:47.014 [2024-07-24 06:55:01.522473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1441476 ] 00:05:47.014 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.274 [2024-07-24 06:55:01.668369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.274 [2024-07-24 06:55:01.882578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.274 [2024-07-24 06:55:01.882673] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. 
Specify another. 00:05:47.274 [2024-07-24 06:55:01.882693] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:47.274 [2024-07-24 06:55:01.882708] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:47.843 06:55:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:47.843 06:55:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:47.843 06:55:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:47.843 06:55:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:47.843 06:55:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:47.843 06:55:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:47.843 06:55:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:47.843 06:55:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1441204 00:05:47.843 06:55:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1441204 ']' 00:05:47.843 06:55:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1441204 00:05:47.843 06:55:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:47.843 06:55:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:47.843 06:55:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1441204 00:05:47.843 06:55:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:47.843 06:55:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:47.843 06:55:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1441204' 00:05:47.843 killing process with pid 1441204 00:05:47.843 06:55:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1441204 00:05:47.843 06:55:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1441204 00:05:50.432 00:05:50.432 real 0m4.578s 00:05:50.432 user 0m5.106s 00:05:50.432 sys 0m0.728s 00:05:50.432 06:55:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.432 06:55:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:50.432 ************************************ 00:05:50.432 END TEST exit_on_failed_rpc_init 00:05:50.432 ************************************ 00:05:50.432 06:55:04 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:50.432 00:05:50.432 real 0m23.696s 00:05:50.432 user 0m22.926s 00:05:50.432 sys 0m2.510s 00:05:50.432 06:55:04 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.432 06:55:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.432 ************************************ 00:05:50.432 END TEST skip_rpc 00:05:50.432 ************************************ 00:05:50.432 06:55:04 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:50.432 06:55:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.432 06:55:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.432 
06:55:04 -- common/autotest_common.sh@10 -- # set +x 00:05:50.432 ************************************ 00:05:50.432 START TEST rpc_client 00:05:50.432 ************************************ 00:05:50.432 06:55:04 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:50.433 * Looking for test storage... 00:05:50.433 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:05:50.433 06:55:04 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:50.433 OK 00:05:50.433 06:55:05 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:50.433 00:05:50.433 real 0m0.161s 00:05:50.433 user 0m0.066s 00:05:50.433 sys 0m0.105s 00:05:50.433 06:55:05 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.433 06:55:05 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:50.433 ************************************ 00:05:50.433 END TEST rpc_client 00:05:50.433 ************************************ 00:05:50.433 06:55:05 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:50.433 06:55:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.433 06:55:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.433 06:55:05 -- common/autotest_common.sh@10 -- # set +x 00:05:50.700 ************************************ 00:05:50.700 START TEST json_config 00:05:50.700 ************************************ 00:05:50.700 06:55:05 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:50.700 06:55:05 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:50.700 06:55:05 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:50.700 06:55:05 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:50.700 06:55:05 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:50.700 06:55:05 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:50.700 06:55:05 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:50.700 06:55:05 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:50.700 06:55:05 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:50.700 06:55:05 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:50.700 06:55:05 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:50.700 06:55:05 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:50.700 06:55:05 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:50.700 06:55:05 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:50.700 06:55:05 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:50.700 06:55:05 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:50.700 06:55:05 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:50.700 06:55:05 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:50.700 06:55:05 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:50.700 06:55:05 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:50.700 06:55:05 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:50.700 06:55:05 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:50.700 06:55:05 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:50.700 06:55:05 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.700 06:55:05 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.700 06:55:05 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.700 06:55:05 json_config -- paths/export.sh@5 -- # export PATH 00:05:50.700 06:55:05 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.700 06:55:05 json_config -- nvmf/common.sh@47 -- # : 0 00:05:50.700 06:55:05 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:50.700 06:55:05 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:50.700 06:55:05 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:50.700 06:55:05 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:50.700 06:55:05 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:50.700 06:55:05 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:50.700 06:55:05 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:50.700 06:55:05 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:50.700 06:55:05 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:50.700 06:55:05 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:50.701 06:55:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:50.701 06:55:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:50.701 06:55:05 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:50.701 06:55:05 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:50.701 06:55:05 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:50.701 06:55:05 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:50.701 06:55:05 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:50.701 06:55:05 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:50.701 06:55:05 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:50.701 06:55:05 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:50.701 06:55:05 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:50.701 06:55:05 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:50.701 06:55:05 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:50.701 06:55:05 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:05:50.701 INFO: JSON configuration test init 00:05:50.701 06:55:05 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:50.701 06:55:05 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:50.701 06:55:05 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:50.701 06:55:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.701 06:55:05 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:50.701 06:55:05 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:50.701 06:55:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.701 06:55:05 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:50.701 06:55:05 json_config -- json_config/common.sh@9 -- # local app=target 00:05:50.701 06:55:05 json_config -- json_config/common.sh@10 -- # shift 00:05:50.701 06:55:05 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:50.701 06:55:05 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:50.701 06:55:05 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:50.701 06:55:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:50.701 06:55:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:50.701 06:55:05 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1442134 00:05:50.701 06:55:05 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:50.701 Waiting for target to run... 
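json_config_test_start_app launches the target with --wait-for-rpc and then blocks in waitforlisten until the RPC socket answers. A rough equivalent, assuming the socket path used in this run; the actual waitforlisten helper in autotest_common.sh polls the RPC server itself rather than only checking that the socket file exists:

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  for i in $(seq 1 100); do
    # stop polling once the UNIX domain socket shows up
    [ -S /var/tmp/spdk_tgt.sock ] && break
    sleep 0.1
  done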
00:05:50.701 06:55:05 json_config -- json_config/common.sh@25 -- # waitforlisten 1442134 /var/tmp/spdk_tgt.sock 00:05:50.701 06:55:05 json_config -- common/autotest_common.sh@829 -- # '[' -z 1442134 ']' 00:05:50.701 06:55:05 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:50.701 06:55:05 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:50.701 06:55:05 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.701 06:55:05 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:50.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:50.701 06:55:05 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.701 06:55:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.701 [2024-07-24 06:55:05.293979] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:05:50.701 [2024-07-24 06:55:05.294086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1442134 ] 00:05:50.959 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.217 [2024-07-24 06:55:05.652577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.475 [2024-07-24 06:55:05.849213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.475 06:55:06 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.475 06:55:06 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:51.475 06:55:06 json_config -- json_config/common.sh@26 -- # echo '' 00:05:51.475 00:05:51.475 06:55:06 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:51.475 06:55:06 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:51.475 06:55:06 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:51.475 06:55:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.475 06:55:06 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:05:51.475 06:55:06 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:51.475 06:55:06 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:51.475 06:55:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.475 06:55:06 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:51.475 06:55:06 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:51.475 06:55:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:55.657 06:55:09 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:05:55.657 06:55:09 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:55.657 06:55:09 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:55.657 06:55:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.657 06:55:09 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:05:55.657 06:55:09 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:55.657 06:55:09 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:55.657 06:55:09 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:55.657 06:55:09 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:55.657 06:55:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:55.657 06:55:10 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:55.657 06:55:10 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:55.657 06:55:10 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:55.657 06:55:10 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:55.657 06:55:10 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:55.657 06:55:10 json_config -- json_config/json_config.sh@51 -- # sort 00:05:55.657 06:55:10 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:55.657 06:55:10 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:55.657 06:55:10 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:55.657 06:55:10 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:55.657 06:55:10 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:55.657 06:55:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.657 06:55:10 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:55.657 06:55:10 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:55.657 06:55:10 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:55.657 06:55:10 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:55.657 06:55:10 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:55.657 06:55:10 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:55.657 06:55:10 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:55.657 06:55:10 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:55.657 06:55:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.657 06:55:10 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:55.657 06:55:10 json_config -- json_config/json_config.sh@237 -- # [[ rdma == \r\d\m\a ]] 00:05:55.657 06:55:10 json_config -- json_config/json_config.sh@238 -- # TEST_TRANSPORT=rdma 00:05:55.657 06:55:10 json_config -- json_config/json_config.sh@238 -- # nvmftestinit 00:05:55.657 06:55:10 json_config -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:05:55.657 06:55:10 json_config -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:55.657 06:55:10 json_config -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:55.657 06:55:10 json_config -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:55.657 06:55:10 json_config -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:55.657 06:55:10 json_config -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:55.657 06:55:10 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:55.657 
06:55:10 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:55.657 06:55:10 json_config -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:05:55.657 06:55:10 json_config -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:55.657 06:55:10 json_config -- nvmf/common.sh@285 -- # xtrace_disable 00:05:55.657 06:55:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.616 06:55:18 json_config -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:05.616 06:55:18 json_config -- nvmf/common.sh@291 -- # pci_devs=() 00:06:05.616 06:55:18 json_config -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:05.616 06:55:18 json_config -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:05.616 06:55:18 json_config -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:05.616 06:55:18 json_config -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:05.616 06:55:18 json_config -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:05.616 06:55:18 json_config -- nvmf/common.sh@295 -- # net_devs=() 00:06:05.616 06:55:18 json_config -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:05.616 06:55:18 json_config -- nvmf/common.sh@296 -- # e810=() 00:06:05.616 06:55:18 json_config -- nvmf/common.sh@296 -- # local -ga e810 00:06:05.616 06:55:18 json_config -- nvmf/common.sh@297 -- # x722=() 00:06:05.616 06:55:18 json_config -- nvmf/common.sh@297 -- # local -ga x722 00:06:05.616 06:55:18 json_config -- nvmf/common.sh@298 -- # mlx=() 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@298 -- # local -ga mlx 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:06:05.617 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:06:05.617 
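nvmf/common.sh is resolving each detected Mellanox PCI function (vendor 0x15b3, device 0x1015) to its kernel net device through sysfs; the lookup amounts to the following, which prints mlx_0_0 for the first port on this rig:

  pci=0000:d9:00.0
  ls /sys/bus/pci/devices/$pci/net/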
06:55:18 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:06:05.617 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:06:05.617 Found net devices under 0000:d9:00.0: mlx_0_0 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:06:05.617 Found net devices under 0000:d9:00.1: mlx_0_1 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@414 -- # is_hw=yes 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@420 -- # rdma_device_init 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@58 -- # uname 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@58 -- # 
'[' Linux '!=' Linux ']' 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@62 -- # modprobe ib_cm 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@63 -- # modprobe ib_core 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@64 -- # modprobe ib_umad 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@66 -- # modprobe iw_cm 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@502 -- # allocate_nic_ips 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@73 -- # get_rdma_if_list 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@105 -- # continue 2 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@105 -- # continue 2 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:06:05.617 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:05.617 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:06:05.617 altname enp217s0f0np0 00:06:05.617 altname ens818f0np0 00:06:05.617 inet 192.168.100.8/24 scope global mlx_0_0 00:06:05.617 valid_lft forever preferred_lft forever 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@74 -- # 
get_ip_address mlx_0_1 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:06:05.617 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:05.617 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:06:05.617 altname enp217s0f1np1 00:06:05.617 altname ens818f1np1 00:06:05.617 inet 192.168.100.9/24 scope global mlx_0_1 00:06:05.617 valid_lft forever preferred_lft forever 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@422 -- # return 0 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@86 -- # get_rdma_if_list 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@105 -- # continue 2 00:06:05.617 06:55:18 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:05.618 06:55:18 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:05.618 06:55:18 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:05.618 06:55:18 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:05.618 06:55:18 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:05.618 06:55:18 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:05.618 06:55:18 json_config -- nvmf/common.sh@105 -- # continue 2 00:06:05.618 06:55:18 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:05.618 06:55:18 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:06:05.618 06:55:18 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:05.618 06:55:18 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:05.618 06:55:18 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:05.618 06:55:18 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:05.618 06:55:18 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:05.618 06:55:18 json_config -- 
nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:06:05.618 06:55:18 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:05.618 06:55:18 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:05.618 06:55:18 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:05.618 06:55:18 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:05.618 06:55:18 json_config -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:06:05.618 192.168.100.9' 00:06:05.618 06:55:18 json_config -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:06:05.618 192.168.100.9' 00:06:05.618 06:55:18 json_config -- nvmf/common.sh@457 -- # head -n 1 00:06:05.618 06:55:18 json_config -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:05.618 06:55:18 json_config -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:06:05.618 192.168.100.9' 00:06:05.618 06:55:18 json_config -- nvmf/common.sh@458 -- # tail -n +2 00:06:05.618 06:55:18 json_config -- nvmf/common.sh@458 -- # head -n 1 00:06:05.618 06:55:18 json_config -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:05.618 06:55:18 json_config -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:06:05.618 06:55:18 json_config -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:05.618 06:55:18 json_config -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:06:05.618 06:55:18 json_config -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:06:05.618 06:55:18 json_config -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:06:05.618 06:55:18 json_config -- json_config/json_config.sh@241 -- # [[ -z 192.168.100.8 ]] 00:06:05.618 06:55:18 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:05.618 06:55:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:05.618 MallocForNvmf0 00:06:05.618 06:55:18 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:05.618 06:55:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:05.618 MallocForNvmf1 00:06:05.618 06:55:19 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:06:05.618 06:55:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:06:05.618 [2024-07-24 06:55:19.182731] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:06:05.618 [2024-07-24 06:55:19.217005] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000298c0/0x7fd4c2dbd940) succeed. 00:06:05.618 [2024-07-24 06:55:19.229253] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029a40/0x7fd4c2d79940) succeed. 
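The interface addresses above are extracted with the pipeline shown in the trace, e.g. ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1 yields 192.168.100.8. With networking resolved, the target is populated over the RPC socket; condensed from the calls in this chunk, with the same sizes and names the test passes:

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0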
00:06:05.618 06:55:19 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:05.618 06:55:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:05.618 06:55:19 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:05.618 06:55:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:05.618 06:55:19 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:05.618 06:55:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:05.618 06:55:19 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:05.618 06:55:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:05.618 [2024-07-24 06:55:19.938226] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:05.618 06:55:19 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:06:05.618 06:55:19 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:05.618 06:55:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.618 06:55:19 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:06:05.618 06:55:19 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:05.618 06:55:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.618 06:55:20 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:06:05.618 06:55:20 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:05.618 06:55:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:05.618 MallocBdevForConfigChangeCheck 00:06:05.618 06:55:20 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:06:05.618 06:55:20 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:05.618 06:55:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.875 06:55:20 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:06:05.875 06:55:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:06.131 06:55:20 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:06:06.131 INFO: shutting down applications... 
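Before the shutdown sequence that begins here, the target had been wired up with a single NVMe-oF subsystem; the equivalent plain RPC calls, using exactly the subsystem name, namespaces and listener from this run:

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420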
00:06:06.131 06:55:20 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:06:06.131 06:55:20 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:06:06.131 06:55:20 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:06:06.131 06:55:20 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:08.652 Calling clear_iscsi_subsystem 00:06:08.652 Calling clear_nvmf_subsystem 00:06:08.652 Calling clear_nbd_subsystem 00:06:08.652 Calling clear_ublk_subsystem 00:06:08.652 Calling clear_vhost_blk_subsystem 00:06:08.652 Calling clear_vhost_scsi_subsystem 00:06:08.652 Calling clear_bdev_subsystem 00:06:08.652 06:55:23 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:06:08.652 06:55:23 json_config -- json_config/json_config.sh@347 -- # count=100 00:06:08.652 06:55:23 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:06:08.652 06:55:23 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:08.652 06:55:23 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:08.652 06:55:23 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:08.915 06:55:23 json_config -- json_config/json_config.sh@349 -- # break 00:06:08.915 06:55:23 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:06:08.915 06:55:23 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:06:08.915 06:55:23 json_config -- json_config/common.sh@31 -- # local app=target 00:06:08.915 06:55:23 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:08.915 06:55:23 json_config -- json_config/common.sh@35 -- # [[ -n 1442134 ]] 00:06:08.915 06:55:23 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1442134 00:06:08.915 06:55:23 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:08.915 06:55:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.915 06:55:23 json_config -- json_config/common.sh@41 -- # kill -0 1442134 00:06:08.915 06:55:23 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:09.482 06:55:23 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:09.482 06:55:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:09.482 06:55:23 json_config -- json_config/common.sh@41 -- # kill -0 1442134 00:06:09.482 06:55:23 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:10.048 06:55:24 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:10.048 06:55:24 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:10.048 06:55:24 json_config -- json_config/common.sh@41 -- # kill -0 1442134 00:06:10.048 06:55:24 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:10.048 06:55:24 json_config -- json_config/common.sh@43 -- # break 00:06:10.048 06:55:24 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:10.048 06:55:24 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:10.048 SPDK target shutdown done 00:06:10.048 06:55:24 json_config -- 
json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:06:10.048 INFO: relaunching applications... 00:06:10.048 06:55:24 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.048 06:55:24 json_config -- json_config/common.sh@9 -- # local app=target 00:06:10.048 06:55:24 json_config -- json_config/common.sh@10 -- # shift 00:06:10.048 06:55:24 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:10.048 06:55:24 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:10.048 06:55:24 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:10.048 06:55:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.048 06:55:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.048 06:55:24 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1448230 00:06:10.048 06:55:24 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:10.048 06:55:24 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.048 Waiting for target to run... 00:06:10.048 06:55:24 json_config -- json_config/common.sh@25 -- # waitforlisten 1448230 /var/tmp/spdk_tgt.sock 00:06:10.048 06:55:24 json_config -- common/autotest_common.sh@829 -- # '[' -z 1448230 ']' 00:06:10.048 06:55:24 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:10.048 06:55:24 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.048 06:55:24 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:10.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:10.048 06:55:24 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.048 06:55:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.048 [2024-07-24 06:55:24.504514] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:06:10.048 [2024-07-24 06:55:24.504636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1448230 ] 00:06:10.048 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.613 [2024-07-24 06:55:25.020360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.613 [2024-07-24 06:55:25.217570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.787 [2024-07-24 06:55:28.995595] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x61200002a340/0x7f641dc02940) succeed. 00:06:14.787 [2024-07-24 06:55:29.006210] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x61200002a4c0/0x7f641dbbe940) succeed. 
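The relaunch does not replay RPCs; spdk_tgt reads the previously saved configuration at startup. The command the harness issues here is, in essence:

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json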
00:06:14.787 [2024-07-24 06:55:29.070824] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:14.787 06:55:29 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.787 06:55:29 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:14.787 06:55:29 json_config -- json_config/common.sh@26 -- # echo '' 00:06:14.787 00:06:14.787 06:55:29 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:06:14.788 06:55:29 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:14.788 INFO: Checking if target configuration is the same... 00:06:14.788 06:55:29 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:14.788 06:55:29 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:06:14.788 06:55:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:14.788 + '[' 2 -ne 2 ']' 00:06:14.788 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:14.788 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:14.788 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:14.788 +++ basename /dev/fd/62 00:06:14.788 ++ mktemp /tmp/62.XXX 00:06:14.788 + tmp_file_1=/tmp/62.al6 00:06:14.788 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:14.788 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:14.788 + tmp_file_2=/tmp/spdk_tgt_config.json.UVn 00:06:14.788 + ret=0 00:06:14.788 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:15.044 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:15.044 + diff -u /tmp/62.al6 /tmp/spdk_tgt_config.json.UVn 00:06:15.044 + echo 'INFO: JSON config files are the same' 00:06:15.044 INFO: JSON config files are the same 00:06:15.044 + rm /tmp/62.al6 /tmp/spdk_tgt_config.json.UVn 00:06:15.044 + exit 0 00:06:15.044 06:55:29 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:06:15.044 06:55:29 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:15.044 INFO: changing configuration and checking if this can be detected... 
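The equality check above normalizes both configurations before diffing, so key ordering cannot cause false mismatches. A minimal sketch of the same flow, assuming config_filter.py reads JSON on stdin and writes the sorted form to stdout (the way json_diff.sh drives it here); the temp file names below are illustrative:

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live.json
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort < /tmp/live.json > /tmp/live.sorted
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort < /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json > /tmp/saved.sorted
  diff -u /tmp/saved.sorted /tmp/live.sorted && echo 'INFO: JSON config files are the same'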
00:06:15.044 06:55:29 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:15.044 06:55:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:15.339 06:55:29 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:15.339 06:55:29 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:06:15.339 06:55:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:15.339 + '[' 2 -ne 2 ']' 00:06:15.339 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:15.339 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:15.339 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:15.339 +++ basename /dev/fd/62 00:06:15.339 ++ mktemp /tmp/62.XXX 00:06:15.340 + tmp_file_1=/tmp/62.KK4 00:06:15.340 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:15.340 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:15.340 + tmp_file_2=/tmp/spdk_tgt_config.json.Qkt 00:06:15.340 + ret=0 00:06:15.340 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:15.627 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:15.627 + diff -u /tmp/62.KK4 /tmp/spdk_tgt_config.json.Qkt 00:06:15.627 + ret=1 00:06:15.627 + echo '=== Start of file: /tmp/62.KK4 ===' 00:06:15.627 + cat /tmp/62.KK4 00:06:15.627 + echo '=== End of file: /tmp/62.KK4 ===' 00:06:15.627 + echo '' 00:06:15.627 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Qkt ===' 00:06:15.627 + cat /tmp/spdk_tgt_config.json.Qkt 00:06:15.627 + echo '=== End of file: /tmp/spdk_tgt_config.json.Qkt ===' 00:06:15.627 + echo '' 00:06:15.627 + rm /tmp/62.KK4 /tmp/spdk_tgt_config.json.Qkt 00:06:15.627 + exit 1 00:06:15.627 06:55:30 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:06:15.627 INFO: configuration change detected. 
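The change-detection leg is the same normalize-and-diff after deliberately mutating the running target over RPC, so a non-zero diff is the expected result:

  # remove the marker bdev created earlier for exactly this purpose
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  # re-running the sort-and-diff step now exits 1, which json_config.sh reports as a detected configuration change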
00:06:15.627 06:55:30 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:06:15.627 06:55:30 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:06:15.627 06:55:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:15.627 06:55:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.627 06:55:30 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:06:15.627 06:55:30 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:06:15.627 06:55:30 json_config -- json_config/json_config.sh@321 -- # [[ -n 1448230 ]] 00:06:15.627 06:55:30 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:06:15.627 06:55:30 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:06:15.627 06:55:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:15.627 06:55:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.627 06:55:30 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:06:15.627 06:55:30 json_config -- json_config/json_config.sh@197 -- # uname -s 00:06:15.627 06:55:30 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:06:15.627 06:55:30 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:06:15.627 06:55:30 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:06:15.627 06:55:30 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:06:15.627 06:55:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:15.627 06:55:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.627 06:55:30 json_config -- json_config/json_config.sh@327 -- # killprocess 1448230 00:06:15.627 06:55:30 json_config -- common/autotest_common.sh@948 -- # '[' -z 1448230 ']' 00:06:15.627 06:55:30 json_config -- common/autotest_common.sh@952 -- # kill -0 1448230 00:06:15.627 06:55:30 json_config -- common/autotest_common.sh@953 -- # uname 00:06:15.627 06:55:30 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.627 06:55:30 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1448230 00:06:15.627 06:55:30 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:15.627 06:55:30 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:15.627 06:55:30 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1448230' 00:06:15.627 killing process with pid 1448230 00:06:15.627 06:55:30 json_config -- common/autotest_common.sh@967 -- # kill 1448230 00:06:15.627 06:55:30 json_config -- common/autotest_common.sh@972 -- # wait 1448230 00:06:18.937 06:55:33 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:18.937 06:55:33 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:06:18.937 06:55:33 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:18.937 06:55:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.196 06:55:33 json_config -- json_config/json_config.sh@332 -- # return 0 00:06:19.196 06:55:33 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:06:19.196 INFO: Success 00:06:19.196 06:55:33 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:06:19.196 06:55:33 json_config -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:19.196 06:55:33 json_config -- nvmf/common.sh@117 -- # sync 00:06:19.196 06:55:33 json_config -- nvmf/common.sh@119 -- # '[' '' == tcp ']' 00:06:19.196 06:55:33 json_config -- nvmf/common.sh@119 -- # '[' '' == rdma ']' 00:06:19.196 06:55:33 json_config -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:19.196 06:55:33 json_config -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:19.196 06:55:33 json_config -- nvmf/common.sh@495 -- # [[ '' == \t\c\p ]] 00:06:19.196 00:06:19.196 real 0m28.508s 00:06:19.196 user 0m31.183s 00:06:19.196 sys 0m9.193s 00:06:19.196 06:55:33 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.196 06:55:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.196 ************************************ 00:06:19.196 END TEST json_config 00:06:19.196 ************************************ 00:06:19.196 06:55:33 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:19.196 06:55:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.196 06:55:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.196 06:55:33 -- common/autotest_common.sh@10 -- # set +x 00:06:19.196 ************************************ 00:06:19.196 START TEST json_config_extra_key 00:06:19.196 ************************************ 00:06:19.196 06:55:33 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:19.196 06:55:33 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:19.196 06:55:33 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:19.196 06:55:33 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:19.196 06:55:33 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:19.196 06:55:33 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:19.196 06:55:33 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:19.196 06:55:33 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.196 06:55:33 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.196 06:55:33 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.196 06:55:33 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.196 06:55:33 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.196 06:55:33 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.196 06:55:33 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:19.196 06:55:33 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:19.196 06:55:33 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.196 06:55:33 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.196 06:55:33 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:19.196 06:55:33 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:06:19.196 06:55:33 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:19.196 06:55:33 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.196 06:55:33 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.196 06:55:33 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.196 06:55:33 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.196 06:55:33 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.196 06:55:33 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.196 06:55:33 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:19.196 06:55:33 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.196 06:55:33 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:19.196 06:55:33 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:19.196 06:55:33 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:19.196 06:55:33 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:19.196 06:55:33 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.196 06:55:33 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.196 06:55:33 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:19.196 06:55:33 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:19.196 06:55:33 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:19.196 06:55:33 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:06:19.196 06:55:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:19.196 06:55:33 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:19.196 06:55:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:19.196 06:55:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:19.196 06:55:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:19.196 06:55:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:19.196 06:55:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:19.196 06:55:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:19.196 06:55:33 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:19.196 06:55:33 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:19.196 INFO: launching applications... 00:06:19.196 06:55:33 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:19.196 06:55:33 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:19.196 06:55:33 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:19.196 06:55:33 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:19.196 06:55:33 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:19.196 06:55:33 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:19.196 06:55:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:19.196 06:55:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:19.196 06:55:33 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1449952 00:06:19.196 06:55:33 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:19.196 Waiting for target to run... 00:06:19.196 06:55:33 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1449952 /var/tmp/spdk_tgt.sock 00:06:19.196 06:55:33 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1449952 ']' 00:06:19.196 06:55:33 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:19.196 06:55:33 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.196 06:55:33 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:19.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
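For orientation, the launch sequence traced above reduces to a small launch-and-wait pattern: start spdk_tgt on a private RPC socket with a pre-generated JSON config, then poll that socket until the app answers. The sketch below is a condensed illustration, not the verbatim json_config/common.sh; SPDK_ROOT is a placeholder and the polling loop stands in for the waitforlisten helper used by the test.

  SPDK_ROOT=/path/to/spdk             # placeholder for this illustration
  SOCK=/var/tmp/spdk_tgt.sock

  # Start the target with the extra_key JSON config on its own RPC socket.
  "$SPDK_ROOT/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" \
      --json "$SPDK_ROOT/test/json_config/extra_key.json" &
  tgt_pid=$!

  # Stand-in for waitforlisten: poll until the RPC socket accepts requests.
  for _ in $(seq 1 100); do
      "$SPDK_ROOT/scripts/rpc.py" -s "$SOCK" -t 2 rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done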
00:06:19.196 06:55:33 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.196 06:55:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:19.196 06:55:33 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:19.455 [2024-07-24 06:55:33.854359] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:06:19.455 [2024-07-24 06:55:33.854457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1449952 ] 00:06:19.455 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.712 [2024-07-24 06:55:34.216462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.970 [2024-07-24 06:55:34.399151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.534 06:55:35 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.534 06:55:35 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:20.534 06:55:35 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:20.534 00:06:20.534 06:55:35 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:20.534 INFO: shutting down applications... 00:06:20.534 06:55:35 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:20.534 06:55:35 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:20.534 06:55:35 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:20.534 06:55:35 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1449952 ]] 00:06:20.534 06:55:35 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1449952 00:06:20.534 06:55:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:20.534 06:55:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:20.534 06:55:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1449952 00:06:20.534 06:55:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:21.100 06:55:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:21.100 06:55:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:21.100 06:55:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1449952 00:06:21.100 06:55:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:21.665 06:55:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:21.665 06:55:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:21.665 06:55:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1449952 00:06:21.665 06:55:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:22.230 06:55:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:22.230 06:55:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:22.230 06:55:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1449952 00:06:22.230 06:55:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 
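The repeated kill -0 / sleep 0.5 lines above are json_config/common.sh's graceful-shutdown loop: send SIGINT, then give the app a bounded number of half-second intervals to exit. Condensed into a standalone sketch (the PID is this run's 1449952, shown here as a variable; the bound of 30 iterations matches the trace):

  app_pid=1449952                     # PID from this run, illustrative
  kill -SIGINT "$app_pid"             # ask the target to shut down cleanly
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$app_pid" 2>/dev/null || break   # kill -0 only tests that the PID is still alive
      sleep 0.5
  done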
00:06:22.795 06:55:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:22.795 06:55:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:22.795 06:55:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1449952 00:06:22.795 06:55:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:23.052 06:55:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:23.052 06:55:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:23.052 06:55:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1449952 00:06:23.052 06:55:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:23.618 06:55:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:23.618 06:55:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:23.618 06:55:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1449952 00:06:23.618 06:55:38 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:23.618 06:55:38 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:23.618 06:55:38 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:23.618 06:55:38 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:23.618 SPDK target shutdown done 00:06:23.618 06:55:38 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:23.618 Success 00:06:23.618 00:06:23.618 real 0m4.504s 00:06:23.618 user 0m3.933s 00:06:23.618 sys 0m0.575s 00:06:23.618 06:55:38 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.618 06:55:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:23.618 ************************************ 00:06:23.618 END TEST json_config_extra_key 00:06:23.618 ************************************ 00:06:23.618 06:55:38 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:23.618 06:55:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.618 06:55:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.618 06:55:38 -- common/autotest_common.sh@10 -- # set +x 00:06:23.876 ************************************ 00:06:23.876 START TEST alias_rpc 00:06:23.876 ************************************ 00:06:23.876 06:55:38 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:23.876 * Looking for test storage... 
00:06:23.876 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:06:23.876 06:55:38 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:23.876 06:55:38 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1450807 00:06:23.876 06:55:38 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1450807 00:06:23.876 06:55:38 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:23.876 06:55:38 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1450807 ']' 00:06:23.876 06:55:38 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.876 06:55:38 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.876 06:55:38 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.876 06:55:38 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.876 06:55:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.876 [2024-07-24 06:55:38.470646] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:06:23.876 [2024-07-24 06:55:38.470760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1450807 ] 00:06:24.134 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.134 [2024-07-24 06:55:38.616870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.392 [2024-07-24 06:55:38.821428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.324 06:55:39 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.324 06:55:39 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:25.324 06:55:39 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:25.324 06:55:39 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1450807 00:06:25.324 06:55:39 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1450807 ']' 00:06:25.324 06:55:39 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1450807 00:06:25.324 06:55:39 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:25.324 06:55:39 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:25.324 06:55:39 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1450807 00:06:25.324 06:55:39 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:25.324 06:55:39 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:25.324 06:55:39 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1450807' 00:06:25.324 killing process with pid 1450807 00:06:25.324 06:55:39 alias_rpc -- common/autotest_common.sh@967 -- # kill 1450807 00:06:25.324 06:55:39 alias_rpc -- common/autotest_common.sh@972 -- # wait 1450807 00:06:27.851 00:06:27.851 real 0m4.004s 00:06:27.851 user 0m3.904s 00:06:27.851 sys 0m0.629s 00:06:27.851 06:55:42 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.851 06:55:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.851 
************************************ 00:06:27.851 END TEST alias_rpc 00:06:27.851 ************************************ 00:06:27.851 06:55:42 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:27.851 06:55:42 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:27.851 06:55:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.851 06:55:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.851 06:55:42 -- common/autotest_common.sh@10 -- # set +x 00:06:27.851 ************************************ 00:06:27.851 START TEST spdkcli_tcp 00:06:27.851 ************************************ 00:06:27.851 06:55:42 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:27.851 * Looking for test storage... 00:06:27.851 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:06:27.851 06:55:42 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:06:27.851 06:55:42 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:27.851 06:55:42 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:06:27.851 06:55:42 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:27.851 06:55:42 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:27.851 06:55:42 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:27.851 06:55:42 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:27.851 06:55:42 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:27.851 06:55:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:27.851 06:55:42 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1451523 00:06:27.851 06:55:42 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1451523 00:06:27.851 06:55:42 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:27.851 06:55:42 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1451523 ']' 00:06:27.851 06:55:42 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.851 06:55:42 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.851 06:55:42 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.851 06:55:42 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.851 06:55:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:28.109 [2024-07-24 06:55:42.537480] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:06:28.109 [2024-07-24 06:55:42.537585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1451523 ] 00:06:28.109 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.109 [2024-07-24 06:55:42.684687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.366 [2024-07-24 06:55:42.889744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.366 [2024-07-24 06:55:42.889761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.294 06:55:43 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.294 06:55:43 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:29.294 06:55:43 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1451682 00:06:29.294 06:55:43 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:29.294 06:55:43 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:29.294 [ 00:06:29.294 "bdev_malloc_delete", 00:06:29.294 "bdev_malloc_create", 00:06:29.294 "bdev_null_resize", 00:06:29.294 "bdev_null_delete", 00:06:29.294 "bdev_null_create", 00:06:29.294 "bdev_nvme_cuse_unregister", 00:06:29.294 "bdev_nvme_cuse_register", 00:06:29.294 "bdev_opal_new_user", 00:06:29.294 "bdev_opal_set_lock_state", 00:06:29.294 "bdev_opal_delete", 00:06:29.294 "bdev_opal_get_info", 00:06:29.294 "bdev_opal_create", 00:06:29.294 "bdev_nvme_opal_revert", 00:06:29.294 "bdev_nvme_opal_init", 00:06:29.294 "bdev_nvme_send_cmd", 00:06:29.294 "bdev_nvme_get_path_iostat", 00:06:29.294 "bdev_nvme_get_mdns_discovery_info", 00:06:29.294 "bdev_nvme_stop_mdns_discovery", 00:06:29.301 "bdev_nvme_start_mdns_discovery", 00:06:29.301 "bdev_nvme_set_multipath_policy", 00:06:29.301 "bdev_nvme_set_preferred_path", 00:06:29.301 "bdev_nvme_get_io_paths", 00:06:29.301 "bdev_nvme_remove_error_injection", 00:06:29.301 "bdev_nvme_add_error_injection", 00:06:29.301 "bdev_nvme_get_discovery_info", 00:06:29.301 "bdev_nvme_stop_discovery", 00:06:29.301 "bdev_nvme_start_discovery", 00:06:29.301 "bdev_nvme_get_controller_health_info", 00:06:29.301 "bdev_nvme_disable_controller", 00:06:29.301 "bdev_nvme_enable_controller", 00:06:29.301 "bdev_nvme_reset_controller", 00:06:29.301 "bdev_nvme_get_transport_statistics", 00:06:29.301 "bdev_nvme_apply_firmware", 00:06:29.301 "bdev_nvme_detach_controller", 00:06:29.301 "bdev_nvme_get_controllers", 00:06:29.301 "bdev_nvme_attach_controller", 00:06:29.301 "bdev_nvme_set_hotplug", 00:06:29.301 "bdev_nvme_set_options", 00:06:29.301 "bdev_passthru_delete", 00:06:29.301 "bdev_passthru_create", 00:06:29.301 "bdev_lvol_set_parent_bdev", 00:06:29.301 "bdev_lvol_set_parent", 00:06:29.301 "bdev_lvol_check_shallow_copy", 00:06:29.301 "bdev_lvol_start_shallow_copy", 00:06:29.301 "bdev_lvol_grow_lvstore", 00:06:29.301 "bdev_lvol_get_lvols", 00:06:29.301 "bdev_lvol_get_lvstores", 00:06:29.301 "bdev_lvol_delete", 00:06:29.301 "bdev_lvol_set_read_only", 00:06:29.301 "bdev_lvol_resize", 00:06:29.301 "bdev_lvol_decouple_parent", 00:06:29.301 "bdev_lvol_inflate", 00:06:29.301 "bdev_lvol_rename", 00:06:29.301 "bdev_lvol_clone_bdev", 00:06:29.301 "bdev_lvol_clone", 00:06:29.301 "bdev_lvol_snapshot", 00:06:29.301 "bdev_lvol_create", 00:06:29.301 "bdev_lvol_delete_lvstore", 00:06:29.301 
"bdev_lvol_rename_lvstore", 00:06:29.301 "bdev_lvol_create_lvstore", 00:06:29.301 "bdev_raid_set_options", 00:06:29.301 "bdev_raid_remove_base_bdev", 00:06:29.301 "bdev_raid_add_base_bdev", 00:06:29.301 "bdev_raid_delete", 00:06:29.301 "bdev_raid_create", 00:06:29.301 "bdev_raid_get_bdevs", 00:06:29.301 "bdev_error_inject_error", 00:06:29.301 "bdev_error_delete", 00:06:29.301 "bdev_error_create", 00:06:29.301 "bdev_split_delete", 00:06:29.301 "bdev_split_create", 00:06:29.301 "bdev_delay_delete", 00:06:29.301 "bdev_delay_create", 00:06:29.301 "bdev_delay_update_latency", 00:06:29.301 "bdev_zone_block_delete", 00:06:29.301 "bdev_zone_block_create", 00:06:29.301 "blobfs_create", 00:06:29.301 "blobfs_detect", 00:06:29.301 "blobfs_set_cache_size", 00:06:29.301 "bdev_aio_delete", 00:06:29.301 "bdev_aio_rescan", 00:06:29.301 "bdev_aio_create", 00:06:29.301 "bdev_ftl_set_property", 00:06:29.301 "bdev_ftl_get_properties", 00:06:29.301 "bdev_ftl_get_stats", 00:06:29.301 "bdev_ftl_unmap", 00:06:29.301 "bdev_ftl_unload", 00:06:29.301 "bdev_ftl_delete", 00:06:29.301 "bdev_ftl_load", 00:06:29.301 "bdev_ftl_create", 00:06:29.301 "bdev_virtio_attach_controller", 00:06:29.301 "bdev_virtio_scsi_get_devices", 00:06:29.301 "bdev_virtio_detach_controller", 00:06:29.301 "bdev_virtio_blk_set_hotplug", 00:06:29.301 "bdev_iscsi_delete", 00:06:29.301 "bdev_iscsi_create", 00:06:29.301 "bdev_iscsi_set_options", 00:06:29.301 "accel_error_inject_error", 00:06:29.301 "ioat_scan_accel_module", 00:06:29.301 "dsa_scan_accel_module", 00:06:29.301 "iaa_scan_accel_module", 00:06:29.301 "keyring_file_remove_key", 00:06:29.301 "keyring_file_add_key", 00:06:29.301 "keyring_linux_set_options", 00:06:29.301 "iscsi_get_histogram", 00:06:29.301 "iscsi_enable_histogram", 00:06:29.301 "iscsi_set_options", 00:06:29.301 "iscsi_get_auth_groups", 00:06:29.301 "iscsi_auth_group_remove_secret", 00:06:29.301 "iscsi_auth_group_add_secret", 00:06:29.301 "iscsi_delete_auth_group", 00:06:29.301 "iscsi_create_auth_group", 00:06:29.301 "iscsi_set_discovery_auth", 00:06:29.301 "iscsi_get_options", 00:06:29.301 "iscsi_target_node_request_logout", 00:06:29.301 "iscsi_target_node_set_redirect", 00:06:29.301 "iscsi_target_node_set_auth", 00:06:29.301 "iscsi_target_node_add_lun", 00:06:29.301 "iscsi_get_stats", 00:06:29.301 "iscsi_get_connections", 00:06:29.301 "iscsi_portal_group_set_auth", 00:06:29.301 "iscsi_start_portal_group", 00:06:29.301 "iscsi_delete_portal_group", 00:06:29.301 "iscsi_create_portal_group", 00:06:29.301 "iscsi_get_portal_groups", 00:06:29.301 "iscsi_delete_target_node", 00:06:29.301 "iscsi_target_node_remove_pg_ig_maps", 00:06:29.301 "iscsi_target_node_add_pg_ig_maps", 00:06:29.301 "iscsi_create_target_node", 00:06:29.301 "iscsi_get_target_nodes", 00:06:29.301 "iscsi_delete_initiator_group", 00:06:29.301 "iscsi_initiator_group_remove_initiators", 00:06:29.302 "iscsi_initiator_group_add_initiators", 00:06:29.302 "iscsi_create_initiator_group", 00:06:29.302 "iscsi_get_initiator_groups", 00:06:29.302 "nvmf_set_crdt", 00:06:29.302 "nvmf_set_config", 00:06:29.302 "nvmf_set_max_subsystems", 00:06:29.302 "nvmf_stop_mdns_prr", 00:06:29.302 "nvmf_publish_mdns_prr", 00:06:29.302 "nvmf_subsystem_get_listeners", 00:06:29.302 "nvmf_subsystem_get_qpairs", 00:06:29.302 "nvmf_subsystem_get_controllers", 00:06:29.302 "nvmf_get_stats", 00:06:29.302 "nvmf_get_transports", 00:06:29.302 "nvmf_create_transport", 00:06:29.302 "nvmf_get_targets", 00:06:29.302 "nvmf_delete_target", 00:06:29.302 "nvmf_create_target", 00:06:29.302 
"nvmf_subsystem_allow_any_host", 00:06:29.302 "nvmf_subsystem_remove_host", 00:06:29.302 "nvmf_subsystem_add_host", 00:06:29.302 "nvmf_ns_remove_host", 00:06:29.302 "nvmf_ns_add_host", 00:06:29.302 "nvmf_subsystem_remove_ns", 00:06:29.302 "nvmf_subsystem_add_ns", 00:06:29.302 "nvmf_subsystem_listener_set_ana_state", 00:06:29.302 "nvmf_discovery_get_referrals", 00:06:29.302 "nvmf_discovery_remove_referral", 00:06:29.302 "nvmf_discovery_add_referral", 00:06:29.302 "nvmf_subsystem_remove_listener", 00:06:29.302 "nvmf_subsystem_add_listener", 00:06:29.302 "nvmf_delete_subsystem", 00:06:29.302 "nvmf_create_subsystem", 00:06:29.302 "nvmf_get_subsystems", 00:06:29.302 "env_dpdk_get_mem_stats", 00:06:29.302 "nbd_get_disks", 00:06:29.302 "nbd_stop_disk", 00:06:29.302 "nbd_start_disk", 00:06:29.302 "ublk_recover_disk", 00:06:29.302 "ublk_get_disks", 00:06:29.302 "ublk_stop_disk", 00:06:29.302 "ublk_start_disk", 00:06:29.302 "ublk_destroy_target", 00:06:29.302 "ublk_create_target", 00:06:29.302 "virtio_blk_create_transport", 00:06:29.302 "virtio_blk_get_transports", 00:06:29.302 "vhost_controller_set_coalescing", 00:06:29.302 "vhost_get_controllers", 00:06:29.302 "vhost_delete_controller", 00:06:29.302 "vhost_create_blk_controller", 00:06:29.302 "vhost_scsi_controller_remove_target", 00:06:29.302 "vhost_scsi_controller_add_target", 00:06:29.302 "vhost_start_scsi_controller", 00:06:29.302 "vhost_create_scsi_controller", 00:06:29.302 "thread_set_cpumask", 00:06:29.302 "framework_get_governor", 00:06:29.302 "framework_get_scheduler", 00:06:29.302 "framework_set_scheduler", 00:06:29.302 "framework_get_reactors", 00:06:29.302 "thread_get_io_channels", 00:06:29.302 "thread_get_pollers", 00:06:29.302 "thread_get_stats", 00:06:29.302 "framework_monitor_context_switch", 00:06:29.302 "spdk_kill_instance", 00:06:29.302 "log_enable_timestamps", 00:06:29.302 "log_get_flags", 00:06:29.302 "log_clear_flag", 00:06:29.302 "log_set_flag", 00:06:29.302 "log_get_level", 00:06:29.302 "log_set_level", 00:06:29.302 "log_get_print_level", 00:06:29.302 "log_set_print_level", 00:06:29.302 "framework_enable_cpumask_locks", 00:06:29.302 "framework_disable_cpumask_locks", 00:06:29.302 "framework_wait_init", 00:06:29.302 "framework_start_init", 00:06:29.302 "scsi_get_devices", 00:06:29.302 "bdev_get_histogram", 00:06:29.302 "bdev_enable_histogram", 00:06:29.302 "bdev_set_qos_limit", 00:06:29.302 "bdev_set_qd_sampling_period", 00:06:29.302 "bdev_get_bdevs", 00:06:29.302 "bdev_reset_iostat", 00:06:29.302 "bdev_get_iostat", 00:06:29.302 "bdev_examine", 00:06:29.302 "bdev_wait_for_examine", 00:06:29.302 "bdev_set_options", 00:06:29.302 "notify_get_notifications", 00:06:29.302 "notify_get_types", 00:06:29.302 "accel_get_stats", 00:06:29.302 "accel_set_options", 00:06:29.302 "accel_set_driver", 00:06:29.302 "accel_crypto_key_destroy", 00:06:29.302 "accel_crypto_keys_get", 00:06:29.302 "accel_crypto_key_create", 00:06:29.302 "accel_assign_opc", 00:06:29.302 "accel_get_module_info", 00:06:29.302 "accel_get_opc_assignments", 00:06:29.302 "vmd_rescan", 00:06:29.302 "vmd_remove_device", 00:06:29.302 "vmd_enable", 00:06:29.302 "sock_get_default_impl", 00:06:29.302 "sock_set_default_impl", 00:06:29.302 "sock_impl_set_options", 00:06:29.302 "sock_impl_get_options", 00:06:29.302 "iobuf_get_stats", 00:06:29.302 "iobuf_set_options", 00:06:29.302 "framework_get_pci_devices", 00:06:29.302 "framework_get_config", 00:06:29.302 "framework_get_subsystems", 00:06:29.302 "trace_get_info", 00:06:29.302 "trace_get_tpoint_group_mask", 00:06:29.302 
"trace_disable_tpoint_group", 00:06:29.302 "trace_enable_tpoint_group", 00:06:29.302 "trace_clear_tpoint_mask", 00:06:29.302 "trace_set_tpoint_mask", 00:06:29.302 "keyring_get_keys", 00:06:29.302 "spdk_get_version", 00:06:29.302 "rpc_get_methods" 00:06:29.302 ] 00:06:29.559 06:55:43 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:29.559 06:55:43 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:29.559 06:55:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:29.559 06:55:43 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:29.559 06:55:43 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1451523 00:06:29.559 06:55:43 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1451523 ']' 00:06:29.559 06:55:43 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1451523 00:06:29.559 06:55:43 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:29.559 06:55:43 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:29.559 06:55:43 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1451523 00:06:29.559 06:55:44 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:29.559 06:55:44 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:29.559 06:55:44 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1451523' 00:06:29.559 killing process with pid 1451523 00:06:29.559 06:55:44 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1451523 00:06:29.559 06:55:44 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1451523 00:06:32.120 00:06:32.120 real 0m4.073s 00:06:32.120 user 0m7.135s 00:06:32.120 sys 0m0.662s 00:06:32.120 06:55:46 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.120 06:55:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:32.120 ************************************ 00:06:32.120 END TEST spdkcli_tcp 00:06:32.120 ************************************ 00:06:32.120 06:55:46 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:32.120 06:55:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.120 06:55:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.120 06:55:46 -- common/autotest_common.sh@10 -- # set +x 00:06:32.120 ************************************ 00:06:32.120 START TEST dpdk_mem_utility 00:06:32.120 ************************************ 00:06:32.120 06:55:46 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:32.120 * Looking for test storage... 
00:06:32.120 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:06:32.120 06:55:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:32.120 06:55:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1452282 00:06:32.120 06:55:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1452282 00:06:32.120 06:55:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:32.120 06:55:46 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1452282 ']' 00:06:32.120 06:55:46 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.120 06:55:46 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.120 06:55:46 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.120 06:55:46 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.120 06:55:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:32.120 [2024-07-24 06:55:46.717754] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:06:32.120 [2024-07-24 06:55:46.717856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1452282 ] 00:06:32.382 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.382 [2024-07-24 06:55:46.866446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.640 [2024-07-24 06:55:47.079941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.573 06:55:47 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.573 06:55:47 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:33.573 06:55:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:33.573 06:55:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:33.573 06:55:47 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.573 06:55:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:33.573 { 00:06:33.573 "filename": "/tmp/spdk_mem_dump.txt" 00:06:33.573 } 00:06:33.573 06:55:47 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.573 06:55:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:33.573 DPDK memory size 820.000000 MiB in 1 heap(s) 00:06:33.573 1 heaps totaling size 820.000000 MiB 00:06:33.573 size: 820.000000 MiB heap id: 0 00:06:33.573 end heaps---------- 00:06:33.573 8 mempools totaling size 598.116089 MiB 00:06:33.573 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:33.573 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:33.573 size: 84.521057 MiB name: bdev_io_1452282 00:06:33.573 size: 51.011292 MiB name: evtpool_1452282 00:06:33.573 size: 50.003479 MiB 
name: msgpool_1452282 00:06:33.573 size: 21.763794 MiB name: PDU_Pool 00:06:33.573 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:33.573 size: 0.026123 MiB name: Session_Pool 00:06:33.573 end mempools------- 00:06:33.573 6 memzones totaling size 4.142822 MiB 00:06:33.573 size: 1.000366 MiB name: RG_ring_0_1452282 00:06:33.573 size: 1.000366 MiB name: RG_ring_1_1452282 00:06:33.573 size: 1.000366 MiB name: RG_ring_4_1452282 00:06:33.573 size: 1.000366 MiB name: RG_ring_5_1452282 00:06:33.573 size: 0.125366 MiB name: RG_ring_2_1452282 00:06:33.573 size: 0.015991 MiB name: RG_ring_3_1452282 00:06:33.573 end memzones------- 00:06:33.573 06:55:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:33.573 heap id: 0 total size: 820.000000 MiB number of busy elements: 41 number of free elements: 19 00:06:33.573 list of free elements. size: 18.514832 MiB 00:06:33.573 element at address: 0x200000400000 with size: 1.999451 MiB 00:06:33.573 element at address: 0x200000800000 with size: 1.996887 MiB 00:06:33.573 element at address: 0x200007000000 with size: 1.995972 MiB 00:06:33.573 element at address: 0x20000b200000 with size: 1.995972 MiB 00:06:33.573 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:33.573 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:33.573 element at address: 0x200019600000 with size: 0.999329 MiB 00:06:33.573 element at address: 0x200003e00000 with size: 0.996094 MiB 00:06:33.573 element at address: 0x200032200000 with size: 0.994324 MiB 00:06:33.573 element at address: 0x200018e00000 with size: 0.959900 MiB 00:06:33.573 element at address: 0x200019900040 with size: 0.937256 MiB 00:06:33.573 element at address: 0x200000200000 with size: 0.840942 MiB 00:06:33.573 element at address: 0x20001b000000 with size: 0.583191 MiB 00:06:33.573 element at address: 0x200019200000 with size: 0.491150 MiB 00:06:33.573 element at address: 0x200019a00000 with size: 0.485657 MiB 00:06:33.573 element at address: 0x200013800000 with size: 0.470581 MiB 00:06:33.573 element at address: 0x200028400000 with size: 0.411072 MiB 00:06:33.573 element at address: 0x200003a00000 with size: 0.356140 MiB 00:06:33.573 element at address: 0x20000b1ff040 with size: 0.001038 MiB 00:06:33.573 list of standard malloc elements. 
size: 199.220764 MiB 00:06:33.573 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:06:33.573 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:06:33.573 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:06:33.573 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:33.573 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:33.573 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:33.573 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:06:33.573 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:33.573 element at address: 0x2000137ff040 with size: 0.000427 MiB 00:06:33.573 element at address: 0x2000137ffa00 with size: 0.000366 MiB 00:06:33.573 element at address: 0x2000002d7480 with size: 0.000244 MiB 00:06:33.573 element at address: 0x2000002d7580 with size: 0.000244 MiB 00:06:33.573 element at address: 0x2000002d7680 with size: 0.000244 MiB 00:06:33.573 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:06:33.573 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:06:33.573 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:33.573 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:33.573 element at address: 0x200003aff980 with size: 0.000244 MiB 00:06:33.573 element at address: 0x200003affa80 with size: 0.000244 MiB 00:06:33.573 element at address: 0x200003eff000 with size: 0.000244 MiB 00:06:33.573 element at address: 0x20000b1ff480 with size: 0.000244 MiB 00:06:33.573 element at address: 0x20000b1ff580 with size: 0.000244 MiB 00:06:33.573 element at address: 0x20000b1ff680 with size: 0.000244 MiB 00:06:33.573 element at address: 0x20000b1ff780 with size: 0.000244 MiB 00:06:33.573 element at address: 0x20000b1ff880 with size: 0.000244 MiB 00:06:33.573 element at address: 0x20000b1ff980 with size: 0.000244 MiB 00:06:33.573 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:06:33.573 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:06:33.573 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:06:33.573 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:06:33.573 element at address: 0x2000137ff200 with size: 0.000244 MiB 00:06:33.573 element at address: 0x2000137ff300 with size: 0.000244 MiB 00:06:33.573 element at address: 0x2000137ff400 with size: 0.000244 MiB 00:06:33.573 element at address: 0x2000137ff500 with size: 0.000244 MiB 00:06:33.573 element at address: 0x2000137ff600 with size: 0.000244 MiB 00:06:33.573 element at address: 0x2000137ff700 with size: 0.000244 MiB 00:06:33.573 element at address: 0x2000137ff800 with size: 0.000244 MiB 00:06:33.573 element at address: 0x2000137ff900 with size: 0.000244 MiB 00:06:33.573 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:06:33.573 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:06:33.573 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:06:33.573 list of memzone associated elements. 
size: 602.264404 MiB 00:06:33.573 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:06:33.573 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:33.573 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:06:33.573 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:33.573 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:06:33.573 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1452282_0 00:06:33.573 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:06:33.573 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1452282_0 00:06:33.573 element at address: 0x200003fff340 with size: 48.003113 MiB 00:06:33.573 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1452282_0 00:06:33.573 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:06:33.573 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:33.573 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:06:33.573 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:33.573 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:06:33.573 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1452282 00:06:33.573 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:06:33.574 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1452282 00:06:33.574 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:33.574 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1452282 00:06:33.574 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:33.574 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:33.574 element at address: 0x200019abc780 with size: 1.008179 MiB 00:06:33.574 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:33.574 element at address: 0x200018efde00 with size: 1.008179 MiB 00:06:33.574 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:33.574 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:06:33.574 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:33.574 element at address: 0x200003eff100 with size: 1.000549 MiB 00:06:33.574 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1452282 00:06:33.574 element at address: 0x200003affb80 with size: 1.000549 MiB 00:06:33.574 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1452282 00:06:33.574 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:06:33.574 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1452282 00:06:33.574 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:06:33.574 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1452282 00:06:33.574 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:06:33.574 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1452282 00:06:33.574 element at address: 0x20001927dbc0 with size: 0.500549 MiB 00:06:33.574 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:33.574 element at address: 0x200013878780 with size: 0.500549 MiB 00:06:33.574 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:33.574 element at address: 0x200019a7c540 with size: 0.250549 MiB 00:06:33.574 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:33.574 element at address: 0x200003adf740 with size: 0.125549 MiB 00:06:33.574 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1452282 00:06:33.574 element at address: 0x200018ef5bc0 with size: 0.031799 MiB 00:06:33.574 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:33.574 element at address: 0x2000284693c0 with size: 0.023804 MiB 00:06:33.574 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:33.574 element at address: 0x200003adb500 with size: 0.016174 MiB 00:06:33.574 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1452282 00:06:33.574 element at address: 0x20002846f540 with size: 0.002502 MiB 00:06:33.574 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:33.574 element at address: 0x2000002d7780 with size: 0.000366 MiB 00:06:33.574 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1452282 00:06:33.574 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:06:33.574 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1452282 00:06:33.574 element at address: 0x20000b1ffa80 with size: 0.000366 MiB 00:06:33.574 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:33.574 06:55:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:33.574 06:55:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1452282 00:06:33.574 06:55:48 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1452282 ']' 00:06:33.574 06:55:48 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1452282 00:06:33.574 06:55:48 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:33.574 06:55:48 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:33.574 06:55:48 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1452282 00:06:33.574 06:55:48 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:33.574 06:55:48 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:33.574 06:55:48 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1452282' 00:06:33.574 killing process with pid 1452282 00:06:33.574 06:55:48 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1452282 00:06:33.574 06:55:48 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1452282 00:06:36.112 00:06:36.112 real 0m3.932s 00:06:36.112 user 0m3.793s 00:06:36.112 sys 0m0.605s 00:06:36.112 06:55:50 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.112 06:55:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:36.112 ************************************ 00:06:36.112 END TEST dpdk_mem_utility 00:06:36.112 ************************************ 00:06:36.112 06:55:50 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:36.112 06:55:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.112 06:55:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.112 06:55:50 -- common/autotest_common.sh@10 -- # set +x 00:06:36.112 ************************************ 00:06:36.112 START TEST event 00:06:36.112 ************************************ 00:06:36.112 06:55:50 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:36.113 * Looking for test storage... 
00:06:36.113 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:36.113 06:55:50 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:36.113 06:55:50 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:36.113 06:55:50 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:36.113 06:55:50 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:36.113 06:55:50 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.113 06:55:50 event -- common/autotest_common.sh@10 -- # set +x 00:06:36.113 ************************************ 00:06:36.113 START TEST event_perf 00:06:36.113 ************************************ 00:06:36.113 06:55:50 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:36.113 Running I/O for 1 seconds...[2024-07-24 06:55:50.721153] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:06:36.113 [2024-07-24 06:55:50.721238] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1453140 ] 00:06:36.370 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.370 [2024-07-24 06:55:50.867218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:36.627 [2024-07-24 06:55:51.078614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.627 [2024-07-24 06:55:51.078687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.627 [2024-07-24 06:55:51.078715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.627 [2024-07-24 06:55:51.078725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.001 Running I/O for 1 seconds... 00:06:38.001 lcore 0: 208078 00:06:38.001 lcore 1: 208075 00:06:38.001 lcore 2: 208078 00:06:38.001 lcore 3: 208078 00:06:38.001 done. 00:06:38.001 00:06:38.001 real 0m1.795s 00:06:38.001 user 0m4.614s 00:06:38.001 sys 0m0.174s 00:06:38.001 06:55:52 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.001 06:55:52 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:38.001 ************************************ 00:06:38.001 END TEST event_perf 00:06:38.001 ************************************ 00:06:38.001 06:55:52 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:38.001 06:55:52 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:38.001 06:55:52 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.001 06:55:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.001 ************************************ 00:06:38.001 START TEST event_reactor 00:06:38.001 ************************************ 00:06:38.001 06:55:52 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:38.001 [2024-07-24 06:55:52.602500] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:06:38.001 [2024-07-24 06:55:52.602582] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1453437 ] 00:06:38.259 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.259 [2024-07-24 06:55:52.745147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.517 [2024-07-24 06:55:52.943809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.887 test_start 00:06:39.887 oneshot 00:06:39.887 tick 100 00:06:39.887 tick 100 00:06:39.887 tick 250 00:06:39.887 tick 100 00:06:39.887 tick 100 00:06:39.887 tick 100 00:06:39.887 tick 250 00:06:39.887 tick 500 00:06:39.887 tick 100 00:06:39.887 tick 100 00:06:39.887 tick 250 00:06:39.887 tick 100 00:06:39.887 tick 100 00:06:39.887 test_end 00:06:39.887 00:06:39.887 real 0m1.783s 00:06:39.887 user 0m1.598s 00:06:39.887 sys 0m0.178s 00:06:39.887 06:55:54 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.887 06:55:54 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:39.887 ************************************ 00:06:39.887 END TEST event_reactor 00:06:39.887 ************************************ 00:06:39.887 06:55:54 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:39.887 06:55:54 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:39.887 06:55:54 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.887 06:55:54 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.887 ************************************ 00:06:39.887 START TEST event_reactor_perf 00:06:39.887 ************************************ 00:06:39.887 06:55:54 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:39.887 [2024-07-24 06:55:54.465777] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:06:39.887 [2024-07-24 06:55:54.465894] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1453726 ] 00:06:40.145 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.145 [2024-07-24 06:55:54.610498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.402 [2024-07-24 06:55:54.817378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.776 test_start 00:06:41.776 test_end 00:06:41.776 Performance: 405925 events per second 00:06:41.776 00:06:41.776 real 0m1.792s 00:06:41.776 user 0m1.605s 00:06:41.776 sys 0m0.179s 00:06:41.776 06:55:56 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.776 06:55:56 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:41.776 ************************************ 00:06:41.776 END TEST event_reactor_perf 00:06:41.776 ************************************ 00:06:41.776 06:55:56 event -- event/event.sh@49 -- # uname -s 00:06:41.776 06:55:56 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:41.776 06:55:56 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:41.776 06:55:56 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:41.776 06:55:56 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.776 06:55:56 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.776 ************************************ 00:06:41.776 START TEST event_scheduler 00:06:41.776 ************************************ 00:06:41.776 06:55:56 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:41.776 * Looking for test storage... 00:06:41.776 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:06:41.776 06:55:56 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:41.776 06:55:56 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:41.776 06:55:56 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1454220 00:06:41.776 06:55:56 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:41.776 06:55:56 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1454220 00:06:41.776 06:55:56 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1454220 ']' 00:06:41.776 06:55:56 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.776 06:55:56 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.776 06:55:56 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:41.776 06:55:56 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.776 06:55:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:42.034 [2024-07-24 06:55:56.457088] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:06:42.034 [2024-07-24 06:55:56.457205] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1454220 ] 00:06:42.034 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.034 [2024-07-24 06:55:56.601059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.293 [2024-07-24 06:55:56.807209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.293 [2024-07-24 06:55:56.807273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.293 [2024-07-24 06:55:56.807291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.293 [2024-07-24 06:55:56.807300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.858 06:55:57 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.858 06:55:57 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:42.858 06:55:57 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:42.858 06:55:57 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.858 06:55:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:42.858 [2024-07-24 06:55:57.253498] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:42.858 [2024-07-24 06:55:57.253527] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:42.858 [2024-07-24 06:55:57.253546] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:42.858 [2024-07-24 06:55:57.253559] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:42.858 [2024-07-24 06:55:57.253572] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:42.858 06:55:57 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.858 06:55:57 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:42.858 06:55:57 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.858 06:55:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:43.116 [2024-07-24 06:55:57.587644] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
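The sequence above is the standard SPDK flow for swapping schedulers at runtime: the app is launched with --wait-for-rpc so subsystem initialization is deferred, framework_set_scheduler switches from the default static scheduler to dynamic (falling back gracefully when the DPDK governor cannot be initialized, as the *ERROR*/*NOTICE* lines show), and framework_start_init then completes startup. Done by hand against a generic SPDK app, that looks roughly like the sketch below (binary and socket path are placeholders, not taken from this run):

    # start an SPDK application with initialization deferred
    ./spdk/build/bin/spdk_tgt -m 0xF -r /var/tmp/spdk.sock --wait-for-rpc &

    # pick the dynamic scheduler, then finish initialization
    ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
    ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init
    ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_get_scheduler   # should report "dynamic"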
00:06:43.116 06:55:57 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.117 06:55:57 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:43.117 06:55:57 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:43.117 06:55:57 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.117 06:55:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:43.117 ************************************ 00:06:43.117 START TEST scheduler_create_thread 00:06:43.117 ************************************ 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.117 2 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.117 3 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.117 4 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.117 5 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.117 6 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.117 7 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.117 8 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.117 9 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.117 10 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.117 06:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.489 06:55:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.489 06:55:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:44.489 06:55:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:44.489 06:55:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.489 06:55:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.422 06:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.422 00:06:45.422 real 0m2.145s 00:06:45.422 user 0m0.015s 00:06:45.422 sys 0m0.005s 00:06:45.422 06:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.422 06:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.422 ************************************ 00:06:45.422 END TEST scheduler_create_thread 00:06:45.422 ************************************ 00:06:45.422 06:55:59 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:45.422 06:55:59 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1454220 00:06:45.422 06:55:59 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1454220 ']' 00:06:45.422 06:55:59 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1454220 00:06:45.422 06:55:59 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:45.422 06:55:59 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:45.422 06:55:59 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1454220 00:06:45.422 06:55:59 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:45.422 06:55:59 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:45.422 06:55:59 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1454220' 00:06:45.422 killing process with pid 1454220 00:06:45.422 06:55:59 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1454220 00:06:45.422 06:55:59 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1454220 00:06:45.680 [2024-07-24 06:56:00.250911] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
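The scheduler_thread_create / scheduler_thread_set_active / scheduler_thread_delete calls in this test are not part of the stock rpc.py command set; they come from the test's own RPC plugin, loaded through rpc_cmd --plugin scheduler_plugin, and let the shell script spawn dummy SPDK threads with a requested core mask and busy percentage so the dynamic scheduler has something to balance. Called directly, that would look roughly like the sketch below (the plugin directory added to PYTHONPATH and the captured thread id are assumptions):

    # make the test plugin importable by rpc.py (directory is an assumption)
    export PYTHONPATH=$PYTHONPATH:./spdk/test/event/scheduler

    # create a thread pinned to core 0 that requests 100% busy time; the call returns a thread id
    tid=$(./spdk/scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin \
            scheduler_thread_create -n active_pinned -m 0x1 -a 100)

    # change its reported activity to 50%, then remove it
    ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin scheduler_thread_set_active $tid 50
    ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin scheduler_thread_delete $tid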
00:06:47.054 00:06:47.054 real 0m5.244s 00:06:47.054 user 0m8.423s 00:06:47.054 sys 0m0.538s 00:06:47.054 06:56:01 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.054 06:56:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:47.054 ************************************ 00:06:47.054 END TEST event_scheduler 00:06:47.054 ************************************ 00:06:47.054 06:56:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:47.054 06:56:01 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:47.054 06:56:01 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.054 06:56:01 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.054 06:56:01 event -- common/autotest_common.sh@10 -- # set +x 00:06:47.054 ************************************ 00:06:47.054 START TEST app_repeat 00:06:47.054 ************************************ 00:06:47.054 06:56:01 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:47.054 06:56:01 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.054 06:56:01 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.054 06:56:01 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:47.054 06:56:01 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:47.054 06:56:01 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:47.054 06:56:01 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:47.054 06:56:01 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:47.054 06:56:01 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1455151 00:06:47.054 06:56:01 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:47.054 06:56:01 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1455151' 00:06:47.054 Process app_repeat pid: 1455151 00:06:47.054 06:56:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:47.054 06:56:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:47.054 spdk_app_start Round 0 00:06:47.054 06:56:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1455151 /var/tmp/spdk-nbd.sock 00:06:47.054 06:56:01 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1455151 ']' 00:06:47.054 06:56:01 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:47.054 06:56:01 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:47.054 06:56:01 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.054 06:56:01 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:47.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:47.054 06:56:01 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.054 06:56:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:47.054 [2024-07-24 06:56:01.680881] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:06:47.054 [2024-07-24 06:56:01.680984] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1455151 ] 00:06:47.312 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.312 [2024-07-24 06:56:01.828229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.570 [2024-07-24 06:56:02.027578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.570 [2024-07-24 06:56:02.027594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.184 06:56:02 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.184 06:56:02 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:48.184 06:56:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:48.184 Malloc0 00:06:48.184 06:56:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:48.448 Malloc1 00:06:48.449 06:56:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:48.449 06:56:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.449 06:56:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:48.449 06:56:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:48.449 06:56:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.449 06:56:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:48.449 06:56:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:48.449 06:56:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.449 06:56:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:48.449 06:56:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:48.449 06:56:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.449 06:56:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:48.449 06:56:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:48.449 06:56:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:48.449 06:56:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:48.449 06:56:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:48.706 /dev/nbd0 00:06:48.706 06:56:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:48.706 06:56:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:48.706 06:56:03 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:48.706 06:56:03 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:48.706 06:56:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:48.706 06:56:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:48.706 06:56:03 event.app_repeat -- 
common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:48.706 06:56:03 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:48.706 06:56:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:48.706 06:56:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:48.706 06:56:03 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:48.706 1+0 records in 00:06:48.706 1+0 records out 00:06:48.706 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229716 s, 17.8 MB/s 00:06:48.706 06:56:03 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:48.706 06:56:03 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:48.706 06:56:03 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:48.706 06:56:03 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:48.706 06:56:03 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:48.706 06:56:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:48.706 06:56:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:48.706 06:56:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:48.962 /dev/nbd1 00:06:48.962 06:56:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:48.962 06:56:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:48.962 06:56:03 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:48.962 06:56:03 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:48.962 06:56:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:48.962 06:56:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:48.962 06:56:03 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:48.962 06:56:03 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:48.962 06:56:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:48.962 06:56:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:48.962 06:56:03 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:48.962 1+0 records in 00:06:48.962 1+0 records out 00:06:48.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203005 s, 20.2 MB/s 00:06:48.962 06:56:03 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:48.962 06:56:03 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:48.962 06:56:03 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:48.962 06:56:03 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:48.962 06:56:03 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:48.962 06:56:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:48.962 06:56:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 
)) 00:06:48.962 06:56:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:48.962 06:56:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.962 06:56:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:48.962 06:56:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:48.962 { 00:06:48.962 "nbd_device": "/dev/nbd0", 00:06:48.962 "bdev_name": "Malloc0" 00:06:48.962 }, 00:06:48.962 { 00:06:48.962 "nbd_device": "/dev/nbd1", 00:06:48.962 "bdev_name": "Malloc1" 00:06:48.962 } 00:06:48.962 ]' 00:06:48.962 06:56:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:48.962 { 00:06:48.962 "nbd_device": "/dev/nbd0", 00:06:48.962 "bdev_name": "Malloc0" 00:06:48.962 }, 00:06:48.962 { 00:06:48.962 "nbd_device": "/dev/nbd1", 00:06:48.962 "bdev_name": "Malloc1" 00:06:48.962 } 00:06:48.962 ]' 00:06:48.962 06:56:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:49.218 06:56:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:49.218 /dev/nbd1' 00:06:49.218 06:56:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:49.218 /dev/nbd1' 00:06:49.218 06:56:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:49.218 06:56:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:49.218 06:56:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:49.218 06:56:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:49.218 06:56:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:49.218 06:56:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:49.218 06:56:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.218 06:56:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:49.218 06:56:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:49.218 06:56:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:49.218 06:56:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:49.218 06:56:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:49.218 256+0 records in 00:06:49.218 256+0 records out 00:06:49.218 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114481 s, 91.6 MB/s 00:06:49.218 06:56:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:49.218 06:56:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:49.218 256+0 records in 00:06:49.218 256+0 records out 00:06:49.218 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157291 s, 66.7 MB/s 00:06:49.218 06:56:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:49.218 06:56:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:49.218 256+0 records in 00:06:49.218 256+0 records out 00:06:49.218 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0182975 s, 57.3 MB/s 00:06:49.218 06:56:03 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:49.218 06:56:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.218 06:56:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:49.219 06:56:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:49.219 06:56:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:49.219 06:56:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:49.219 06:56:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:49.219 06:56:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.219 06:56:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:49.219 06:56:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.219 06:56:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:49.219 06:56:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:49.219 06:56:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:49.219 06:56:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.219 06:56:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.219 06:56:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:49.219 06:56:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:49.219 06:56:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.219 06:56:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:49.476 06:56:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:49.476 06:56:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:49.476 06:56:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:49.476 06:56:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.476 06:56:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.476 06:56:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:49.476 06:56:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:49.476 06:56:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.476 06:56:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.476 06:56:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:49.476 06:56:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:49.476 06:56:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:49.476 06:56:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:49.476 06:56:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.476 06:56:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.476 
06:56:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:49.476 06:56:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:49.476 06:56:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.476 06:56:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:49.476 06:56:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.476 06:56:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.733 06:56:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:49.733 06:56:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:49.733 06:56:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:49.733 06:56:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:49.733 06:56:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:49.733 06:56:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:49.733 06:56:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:49.733 06:56:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:49.733 06:56:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:49.733 06:56:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:49.733 06:56:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:49.733 06:56:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:49.733 06:56:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:50.298 06:56:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:51.669 [2024-07-24 06:56:06.009803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:51.669 [2024-07-24 06:56:06.202808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.669 [2024-07-24 06:56:06.202808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.927 [2024-07-24 06:56:06.422236] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:51.927 [2024-07-24 06:56:06.422281] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:53.296 06:56:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:53.296 06:56:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:53.296 spdk_app_start Round 1 00:06:53.296 06:56:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1455151 /var/tmp/spdk-nbd.sock 00:06:53.296 06:56:07 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1455151 ']' 00:06:53.296 06:56:07 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:53.296 06:56:07 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.296 06:56:07 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:53.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
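Each app_repeat round, like Round 0 that just completed above, drives the same RPC lifecycle against the nbd app: create two 64 MiB malloc bdevs with a 4 KiB block size, export them through the kernel nbd driver, verify the data path, then unexport and terminate the instance. Stripped of the bash helpers, one round reduces to roughly the following (socket path as in this run, the rest a sketch):

    rpc=./spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    $rpc -s $sock bdev_malloc_create 64 4096           # -> Malloc0
    $rpc -s $sock bdev_malloc_create 64 4096           # -> Malloc1
    $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0     # expose each bdev as a kernel block device
    $rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1
    $rpc -s $sock nbd_get_disks                        # JSON list of exported devices
    # (data verification over /dev/nbd0 and /dev/nbd1 happens here; see the dd/cmp sketch further down)
    $rpc -s $sock nbd_stop_disk /dev/nbd0
    $rpc -s $sock nbd_stop_disk /dev/nbd1
    $rpc -s $sock spdk_kill_instance SIGTERM           # end the round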
00:06:53.296 06:56:07 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.296 06:56:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:53.296 06:56:07 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.296 06:56:07 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:53.296 06:56:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:53.553 Malloc0 00:06:53.553 06:56:08 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:53.811 Malloc1 00:06:53.811 06:56:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:53.811 06:56:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.811 06:56:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:53.811 06:56:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:53.811 06:56:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.811 06:56:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:53.811 06:56:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:53.811 06:56:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.811 06:56:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:53.811 06:56:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:53.811 06:56:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.811 06:56:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:53.811 06:56:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:53.811 06:56:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:53.811 06:56:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:53.811 06:56:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:53.811 /dev/nbd0 00:06:54.068 06:56:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:54.068 06:56:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:54.068 06:56:08 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:54.068 06:56:08 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:54.068 06:56:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:54.068 06:56:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:54.068 06:56:08 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:54.068 06:56:08 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:54.068 06:56:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:54.068 06:56:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:54.068 06:56:08 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:06:54.068 1+0 records in 00:06:54.068 1+0 records out 00:06:54.068 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023765 s, 17.2 MB/s 00:06:54.068 06:56:08 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:54.068 06:56:08 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:54.068 06:56:08 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:54.068 06:56:08 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:54.068 06:56:08 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:54.068 06:56:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.068 06:56:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.068 06:56:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:54.068 /dev/nbd1 00:06:54.068 06:56:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:54.069 06:56:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:54.069 06:56:08 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:54.069 06:56:08 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:54.069 06:56:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:54.069 06:56:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:54.069 06:56:08 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:54.069 06:56:08 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:54.069 06:56:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:54.069 06:56:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:54.069 06:56:08 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:54.069 1+0 records in 00:06:54.069 1+0 records out 00:06:54.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243235 s, 16.8 MB/s 00:06:54.069 06:56:08 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:54.069 06:56:08 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:54.069 06:56:08 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:54.069 06:56:08 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:54.069 06:56:08 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:54.069 06:56:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.069 06:56:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.069 06:56:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:54.069 06:56:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.069 06:56:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:54.325 06:56:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:54.325 { 00:06:54.325 
"nbd_device": "/dev/nbd0", 00:06:54.325 "bdev_name": "Malloc0" 00:06:54.325 }, 00:06:54.325 { 00:06:54.325 "nbd_device": "/dev/nbd1", 00:06:54.325 "bdev_name": "Malloc1" 00:06:54.325 } 00:06:54.325 ]' 00:06:54.325 06:56:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:54.325 { 00:06:54.325 "nbd_device": "/dev/nbd0", 00:06:54.325 "bdev_name": "Malloc0" 00:06:54.325 }, 00:06:54.325 { 00:06:54.325 "nbd_device": "/dev/nbd1", 00:06:54.325 "bdev_name": "Malloc1" 00:06:54.325 } 00:06:54.325 ]' 00:06:54.325 06:56:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:54.325 06:56:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:54.325 /dev/nbd1' 00:06:54.325 06:56:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:54.325 /dev/nbd1' 00:06:54.326 06:56:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:54.326 06:56:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:54.326 06:56:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:54.326 06:56:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:54.326 06:56:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:54.326 06:56:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:54.326 06:56:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.326 06:56:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:54.326 06:56:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:54.326 06:56:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:54.326 06:56:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:54.326 06:56:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:54.326 256+0 records in 00:06:54.326 256+0 records out 00:06:54.326 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114677 s, 91.4 MB/s 00:06:54.326 06:56:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:54.326 06:56:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:54.582 256+0 records in 00:06:54.582 256+0 records out 00:06:54.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153089 s, 68.5 MB/s 00:06:54.582 06:56:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:54.582 06:56:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:54.582 256+0 records in 00:06:54.582 256+0 records out 00:06:54.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0183792 s, 57.1 MB/s 00:06:54.582 06:56:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:54.582 06:56:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.582 06:56:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:54.582 06:56:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:54.582 06:56:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:54.582 06:56:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:54.582 06:56:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:54.582 06:56:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.582 06:56:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:54.582 06:56:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.582 06:56:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:54.582 06:56:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:54.582 06:56:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:54.582 06:56:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.583 06:56:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.583 06:56:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:54.583 06:56:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:54.583 06:56:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.583 06:56:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:54.583 06:56:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:54.583 06:56:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:54.583 06:56:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:54.583 06:56:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.583 06:56:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.583 06:56:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:54.583 06:56:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:54.583 06:56:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.583 06:56:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.583 06:56:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:54.839 06:56:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:54.839 06:56:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:54.839 06:56:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:54.839 06:56:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.839 06:56:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.839 06:56:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:54.839 06:56:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:54.839 06:56:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.839 06:56:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:54.839 06:56:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.839 06:56:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.095 06:56:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:55.095 06:56:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:55.095 06:56:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.095 06:56:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:55.095 06:56:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:55.095 06:56:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.095 06:56:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:55.095 06:56:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:55.095 06:56:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:55.095 06:56:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:55.095 06:56:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:55.095 06:56:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:55.095 06:56:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:55.352 06:56:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:56.720 [2024-07-24 06:56:11.335753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:56.977 [2024-07-24 06:56:11.534219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.977 [2024-07-24 06:56:11.534226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.234 [2024-07-24 06:56:11.755402] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:57.234 [2024-07-24 06:56:11.755451] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:58.603 06:56:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:58.603 06:56:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:58.603 spdk_app_start Round 2 00:06:58.603 06:56:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1455151 /var/tmp/spdk-nbd.sock 00:06:58.603 06:56:12 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1455151 ']' 00:06:58.603 06:56:12 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:58.603 06:56:12 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.603 06:56:12 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:58.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
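The data-path check inside every round is the nbd_dd_data_verify helper seen above: write 1 MiB of random data through each /dev/nbdX with O_DIRECT, then compare the device contents byte-for-byte against the source file. In plain shell the pattern is roughly (the temporary file path is a placeholder):

    # generate 1 MiB of reference data
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256

    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of=$nbd bs=4096 count=256 oflag=direct   # write through the nbd device
        cmp -b -n 1M /tmp/nbdrandtest $nbd                              # fails loudly on any mismatch
    done

    rm /tmp/nbdrandtest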
00:06:58.603 06:56:12 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.603 06:56:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:58.603 06:56:13 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.603 06:56:13 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:58.603 06:56:13 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.867 Malloc0 00:06:58.867 06:56:13 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:59.124 Malloc1 00:06:59.124 06:56:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:59.124 06:56:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.124 06:56:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:59.124 06:56:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:59.124 06:56:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.124 06:56:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:59.124 06:56:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:59.124 06:56:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.124 06:56:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:59.124 06:56:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:59.124 06:56:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.124 06:56:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:59.124 06:56:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:59.124 06:56:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:59.124 06:56:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:59.124 06:56:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:59.381 /dev/nbd0 00:06:59.381 06:56:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:59.381 06:56:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:59.381 06:56:13 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:59.381 06:56:13 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:59.381 06:56:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:59.381 06:56:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:59.381 06:56:13 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:59.381 06:56:13 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:59.381 06:56:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:59.381 06:56:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:59.381 06:56:13 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:06:59.381 1+0 records in 00:06:59.381 1+0 records out 00:06:59.381 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237221 s, 17.3 MB/s 00:06:59.381 06:56:13 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:59.381 06:56:13 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:59.381 06:56:13 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:59.381 06:56:13 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:59.381 06:56:13 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:59.381 06:56:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.381 06:56:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:59.381 06:56:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:59.381 /dev/nbd1 00:06:59.381 06:56:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:59.381 06:56:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:59.381 06:56:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:59.381 06:56:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:59.381 06:56:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:59.381 06:56:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:59.381 06:56:14 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:59.381 06:56:14 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:59.381 06:56:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:59.381 06:56:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:59.639 06:56:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:59.639 1+0 records in 00:06:59.639 1+0 records out 00:06:59.639 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249087 s, 16.4 MB/s 00:06:59.639 06:56:14 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:59.639 06:56:14 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:59.639 06:56:14 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:59.639 06:56:14 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:59.639 06:56:14 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:59.639 06:56:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.639 06:56:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:59.639 06:56:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:59.639 06:56:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.639 06:56:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.639 06:56:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:59.639 { 00:06:59.639 
"nbd_device": "/dev/nbd0", 00:06:59.639 "bdev_name": "Malloc0" 00:06:59.639 }, 00:06:59.639 { 00:06:59.639 "nbd_device": "/dev/nbd1", 00:06:59.639 "bdev_name": "Malloc1" 00:06:59.639 } 00:06:59.639 ]' 00:06:59.639 06:56:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:59.639 { 00:06:59.639 "nbd_device": "/dev/nbd0", 00:06:59.639 "bdev_name": "Malloc0" 00:06:59.639 }, 00:06:59.639 { 00:06:59.639 "nbd_device": "/dev/nbd1", 00:06:59.639 "bdev_name": "Malloc1" 00:06:59.639 } 00:06:59.639 ]' 00:06:59.639 06:56:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.639 06:56:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:59.639 /dev/nbd1' 00:06:59.639 06:56:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:59.639 /dev/nbd1' 00:06:59.639 06:56:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.639 06:56:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:59.639 06:56:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:59.639 06:56:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:59.639 06:56:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:59.639 06:56:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:59.639 06:56:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.639 06:56:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.639 06:56:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:59.639 06:56:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:59.639 06:56:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:59.639 06:56:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:59.639 256+0 records in 00:06:59.639 256+0 records out 00:06:59.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111852 s, 93.7 MB/s 00:06:59.639 06:56:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.639 06:56:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:59.897 256+0 records in 00:06:59.897 256+0 records out 00:06:59.897 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0154332 s, 67.9 MB/s 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:59.897 256+0 records in 00:06:59.897 256+0 records out 00:06:59.897 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0181375 s, 57.8 MB/s 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.897 06:56:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:00.155 06:56:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:00.155 06:56:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:00.155 06:56:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:00.155 06:56:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.155 06:56:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.155 06:56:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:00.155 06:56:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:00.155 06:56:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.155 06:56:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.155 06:56:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.155 06:56:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.412 06:56:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:00.412 06:56:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:00.412 06:56:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.412 06:56:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:00.412 06:56:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.412 06:56:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:00.412 06:56:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:00.412 06:56:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:00.412 06:56:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:00.412 06:56:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:00.412 06:56:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:00.412 06:56:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:00.412 06:56:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:00.670 06:56:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:02.044 [2024-07-24 06:56:16.652245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:02.348 [2024-07-24 06:56:16.852713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.348 [2024-07-24 06:56:16.852713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.607 [2024-07-24 06:56:17.071004] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:02.607 [2024-07-24 06:56:17.071062] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:03.978 06:56:18 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1455151 /var/tmp/spdk-nbd.sock 00:07:03.978 06:56:18 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1455151 ']' 00:07:03.978 06:56:18 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:03.978 06:56:18 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.978 06:56:18 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:03.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
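The app_repeat round traced above exercises the nbd data-verify helper: two Malloc bdevs are exported as /dev/nbd0 and /dev/nbd1 over the /var/tmp/spdk-nbd.sock RPC socket, 1 MiB of random data is pushed through each device with dd, read back with cmp, and the disks are detached until nbd_get_disks returns an empty list. A minimal standalone sketch of the same flow, assuming an SPDK app with nbd support is already listening on that socket and that rpc.py stands in for scripts/rpc.py from the tree:

rpc="rpc.py -s /var/tmp/spdk-nbd.sock"            # illustrative alias for scripts/rpc.py
$rpc bdev_malloc_create 64 4096                   # 64 MiB bdev, 4 KiB blocks -> prints e.g. Malloc0
$rpc nbd_start_disk Malloc0 /dev/nbd0             # expose the bdev as a kernel nbd device
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256          # 1 MiB reference pattern
dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0           # byte-for-byte verification, as in the trace
$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_get_disks | jq -r '.[] | .nbd_device'    # prints nothing once everything is detached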
00:07:03.978 06:56:18 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.978 06:56:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:03.978 06:56:18 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.978 06:56:18 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:03.978 06:56:18 event.app_repeat -- event/event.sh@39 -- # killprocess 1455151 00:07:03.978 06:56:18 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1455151 ']' 00:07:03.978 06:56:18 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1455151 00:07:03.978 06:56:18 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:07:03.978 06:56:18 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:03.978 06:56:18 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1455151 00:07:03.978 06:56:18 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:03.978 06:56:18 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:03.978 06:56:18 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1455151' 00:07:03.978 killing process with pid 1455151 00:07:03.978 06:56:18 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1455151 00:07:03.978 06:56:18 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1455151 00:07:05.350 spdk_app_start is called in Round 0. 00:07:05.350 Shutdown signal received, stop current app iteration 00:07:05.350 Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 reinitialization... 00:07:05.350 spdk_app_start is called in Round 1. 00:07:05.350 Shutdown signal received, stop current app iteration 00:07:05.350 Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 reinitialization... 00:07:05.350 spdk_app_start is called in Round 2. 00:07:05.350 Shutdown signal received, stop current app iteration 00:07:05.350 Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 reinitialization... 00:07:05.350 spdk_app_start is called in Round 3. 
00:07:05.350 Shutdown signal received, stop current app iteration 00:07:05.350 06:56:19 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:05.350 06:56:19 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:05.350 00:07:05.350 real 0m18.087s 00:07:05.350 user 0m36.082s 00:07:05.350 sys 0m3.127s 00:07:05.350 06:56:19 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.350 06:56:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:05.350 ************************************ 00:07:05.350 END TEST app_repeat 00:07:05.350 ************************************ 00:07:05.350 06:56:19 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:05.350 06:56:19 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:05.350 06:56:19 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.350 06:56:19 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.350 06:56:19 event -- common/autotest_common.sh@10 -- # set +x 00:07:05.350 ************************************ 00:07:05.350 START TEST cpu_locks 00:07:05.350 ************************************ 00:07:05.350 06:56:19 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:05.350 * Looking for test storage... 00:07:05.350 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:07:05.350 06:56:19 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:05.350 06:56:19 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:05.350 06:56:19 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:05.350 06:56:19 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:05.350 06:56:19 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.350 06:56:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.350 06:56:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.350 ************************************ 00:07:05.350 START TEST default_locks 00:07:05.350 ************************************ 00:07:05.350 06:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:07:05.350 06:56:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1458569 00:07:05.350 06:56:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1458569 00:07:05.350 06:56:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:05.350 06:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1458569 ']' 00:07:05.350 06:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.351 06:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.351 06:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
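The teardown steps traced above (kill -0, a ps comm= lookup that resolves to reactor_0, then kill and wait) are the killprocess helper from test/common/autotest_common.sh. A condensed sketch of that logic, with the sudo special case omitted:

# condensed approximation of the helper traced above; the real one carries extra guards
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0            # nothing to do if it already exited
    if [ "$(uname)" = Linux ]; then
        local pname
        pname=$(ps --no-headers -o comm= "$pid")      # resolves to reactor_0 for an SPDK target
        # the real helper special-cases pname = sudo; skipped here for brevity
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                       # reap it so the next test starts clean
}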
00:07:05.351 06:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.351 06:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.608 [2024-07-24 06:56:20.009166] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:07:05.608 [2024-07-24 06:56:20.009265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1458569 ] 00:07:05.608 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.608 [2024-07-24 06:56:20.163648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.865 [2024-07-24 06:56:20.378149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.798 06:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.798 06:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:07:06.798 06:56:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1458569 00:07:06.799 06:56:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1458569 00:07:06.799 06:56:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:07.056 lslocks: write error 00:07:07.056 06:56:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1458569 00:07:07.056 06:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1458569 ']' 00:07:07.056 06:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1458569 00:07:07.056 06:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:07:07.056 06:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:07.056 06:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1458569 00:07:07.056 06:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:07.057 06:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:07.057 06:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1458569' 00:07:07.057 killing process with pid 1458569 00:07:07.057 06:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1458569 00:07:07.057 06:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1458569 00:07:09.584 06:56:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1458569 00:07:09.584 06:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:09.584 06:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1458569 00:07:09.584 06:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:09.584 06:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.584 06:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:09.584 06:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.584 06:56:23 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 1458569 00:07:09.584 06:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1458569 ']' 00:07:09.584 06:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.584 06:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.584 06:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.584 06:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.584 06:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.584 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1458569) - No such process 00:07:09.584 ERROR: process (pid: 1458569) is no longer running 00:07:09.584 06:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.584 06:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:07:09.584 06:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:09.584 06:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:09.584 06:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:09.584 06:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:09.584 06:56:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:09.584 06:56:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:09.584 06:56:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:09.584 06:56:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:09.584 00:07:09.584 real 0m3.998s 00:07:09.584 user 0m3.869s 00:07:09.584 sys 0m0.720s 00:07:09.584 06:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.584 06:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.584 ************************************ 00:07:09.584 END TEST default_locks 00:07:09.584 ************************************ 00:07:09.584 06:56:23 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:09.584 06:56:23 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:09.584 06:56:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.584 06:56:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.584 ************************************ 00:07:09.584 START TEST default_locks_via_rpc 00:07:09.584 ************************************ 00:07:09.584 06:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:07:09.584 06:56:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1459331 00:07:09.584 06:56:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1459331 00:07:09.584 06:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1459331 ']' 00:07:09.584 06:56:23 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.584 06:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.584 06:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.584 06:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.584 06:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.584 06:56:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:09.584 [2024-07-24 06:56:24.082322] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:07:09.584 [2024-07-24 06:56:24.082421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1459331 ] 00:07:09.584 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.843 [2024-07-24 06:56:24.228163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.843 [2024-07-24 06:56:24.425174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.777 06:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.777 06:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:10.777 06:56:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:10.777 06:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.777 06:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.777 06:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.777 06:56:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:10.777 06:56:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:10.777 06:56:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:10.777 06:56:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:10.777 06:56:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:10.777 06:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.777 06:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.777 06:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.777 06:56:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1459331 00:07:10.777 06:56:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1459331 00:07:10.777 06:56:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:11.343 06:56:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 
-- # killprocess 1459331 00:07:11.343 06:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1459331 ']' 00:07:11.343 06:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1459331 00:07:11.343 06:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:07:11.343 06:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:11.343 06:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1459331 00:07:11.343 06:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:11.343 06:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:11.343 06:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1459331' 00:07:11.343 killing process with pid 1459331 00:07:11.343 06:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 1459331 00:07:11.343 06:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1459331 00:07:13.873 00:07:13.873 real 0m4.092s 00:07:13.873 user 0m3.991s 00:07:13.873 sys 0m0.728s 00:07:13.873 06:56:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.873 06:56:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.873 ************************************ 00:07:13.873 END TEST default_locks_via_rpc 00:07:13.873 ************************************ 00:07:13.873 06:56:28 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:13.873 06:56:28 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:13.873 06:56:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.873 06:56:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.873 ************************************ 00:07:13.873 START TEST non_locking_app_on_locked_coremask 00:07:13.873 ************************************ 00:07:13.873 06:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:07:13.873 06:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1459974 00:07:13.873 06:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1459974 /var/tmp/spdk.sock 00:07:13.873 06:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1459974 ']' 00:07:13.873 06:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.873 06:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.873 06:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
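default_locks_via_rpc, traced above, flips core locking at runtime rather than at startup: framework_disable_cpumask_locks releases the /var/tmp/spdk_cpu_lock_* files (the glob comes back empty in the trace), framework_enable_cpumask_locks re-claims them, and locks_exist confirms the claim by grepping lslocks output for the target pid. A rough shell equivalent, assuming a target is already up on /var/tmp/spdk.sock and rpc.py is on PATH:

rpc="rpc.py -s /var/tmp/spdk.sock"                    # illustrative; the test uses scripts/rpc.py
pid=$(pgrep -f spdk_tgt | head -n1)                   # illustrative way of finding the target pid

$rpc framework_disable_cpumask_locks                  # release the per-core lock files
ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo "no core lock files, as expected"

$rpc framework_enable_cpumask_locks                   # claim them again at runtime
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held again by pid $pid"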
00:07:13.873 06:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.873 06:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.873 06:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:13.873 [2024-07-24 06:56:28.247557] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:07:13.873 [2024-07-24 06:56:28.247659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1459974 ] 00:07:13.873 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.873 [2024-07-24 06:56:28.396718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.132 [2024-07-24 06:56:28.605297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.066 06:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.066 06:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:15.066 06:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1460242 00:07:15.066 06:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1460242 /var/tmp/spdk2.sock 00:07:15.066 06:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1460242 ']' 00:07:15.066 06:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:15.066 06:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.066 06:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:15.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:15.066 06:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.066 06:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.066 06:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:15.066 [2024-07-24 06:56:29.550105] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:07:15.066 [2024-07-24 06:56:29.550216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1460242 ] 00:07:15.066 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.324 [2024-07-24 06:56:29.746494] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
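non_locking_app_on_locked_coremask, starting above, is the sharing case: the first spdk_tgt claims core 0 (-m 0x1), and a second instance is pointed at the same core but with --disable-cpumask-locks and its own RPC socket, so it starts cleanly next to the lock holder ("CPU core locks deactivated" in the trace). Stripped to its essentials, with $SPDK_BIN standing in for the workspace's build/bin directory:

$SPDK_BIN/spdk_tgt -m 0x1 &                                     # first instance claims the core-0 lock
first=$!
sleep 2                                                         # crude stand-in for waitforlisten
# second instance shares core 0 but never tries to take the lock, so both can run
$SPDK_BIN/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
second=$!
sleep 2
lslocks -p "$first" | grep -c spdk_cpu_lock                     # only the first instance holds the lock
kill "$second" "$first"; wait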
00:07:15.324 [2024-07-24 06:56:29.746539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.582 [2024-07-24 06:56:30.177147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.481 06:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:17.481 06:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:17.481 06:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1459974 00:07:17.481 06:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1459974 00:07:17.481 06:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:18.413 lslocks: write error 00:07:18.413 06:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1459974 00:07:18.413 06:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1459974 ']' 00:07:18.413 06:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1459974 00:07:18.413 06:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:18.413 06:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:18.413 06:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1459974 00:07:18.413 06:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:18.413 06:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:18.413 06:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1459974' 00:07:18.413 killing process with pid 1459974 00:07:18.413 06:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1459974 00:07:18.413 06:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1459974 00:07:23.718 06:56:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1460242 00:07:23.718 06:56:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1460242 ']' 00:07:23.718 06:56:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1460242 00:07:23.718 06:56:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:23.718 06:56:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:23.718 06:56:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1460242 00:07:23.718 06:56:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:23.718 06:56:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:23.718 06:56:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1460242' 00:07:23.718 
killing process with pid 1460242 00:07:23.718 06:56:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1460242 00:07:23.718 06:56:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1460242 00:07:25.615 00:07:25.615 real 0m11.929s 00:07:25.615 user 0m12.043s 00:07:25.615 sys 0m1.567s 00:07:25.615 06:56:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.615 06:56:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.615 ************************************ 00:07:25.615 END TEST non_locking_app_on_locked_coremask 00:07:25.615 ************************************ 00:07:25.615 06:56:40 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:25.615 06:56:40 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:25.615 06:56:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.615 06:56:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:25.615 ************************************ 00:07:25.615 START TEST locking_app_on_unlocked_coremask 00:07:25.615 ************************************ 00:07:25.615 06:56:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:07:25.615 06:56:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1462139 00:07:25.615 06:56:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1462139 /var/tmp/spdk.sock 00:07:25.615 06:56:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1462139 ']' 00:07:25.615 06:56:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.615 06:56:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:25.615 06:56:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.615 06:56:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:25.615 06:56:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.615 06:56:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:25.873 [2024-07-24 06:56:40.253436] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:07:25.873 [2024-07-24 06:56:40.253532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1462139 ] 00:07:25.873 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.873 [2024-07-24 06:56:40.401899] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:25.873 [2024-07-24 06:56:40.401938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.130 [2024-07-24 06:56:40.595465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.062 06:56:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:27.062 06:56:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:27.062 06:56:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1462411 00:07:27.062 06:56:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1462411 /var/tmp/spdk2.sock 00:07:27.062 06:56:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1462411 ']' 00:07:27.062 06:56:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:27.062 06:56:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:27.062 06:56:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:27.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:27.062 06:56:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:27.062 06:56:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.062 06:56:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:27.062 [2024-07-24 06:56:41.553279] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
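Both instances above are gated on waitforlisten, which the trace shows being handed a pid, an RPC socket path, and max_retries=100; it blocks until the target answers on that socket. The real helper in autotest_common.sh does more bookkeeping, but a minimal stand-in could poll the socket like this (rpc_get_methods is simply a cheap RPC to probe with):

waitforlisten() {                                        # sketch only; not the upstream implementation
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
    for ((i = 1; i <= 100; i++)); do
        kill -0 "$pid" || return 1                       # target died while we were waiting
        rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
        sleep 0.5
    done
    return 1                                             # never came up within max_retries
}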
00:07:27.062 [2024-07-24 06:56:41.553381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1462411 ] 00:07:27.062 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.319 [2024-07-24 06:56:41.752880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.577 [2024-07-24 06:56:42.167519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.476 06:56:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:29.476 06:56:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:29.476 06:56:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1462411 00:07:29.476 06:56:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1462411 00:07:29.476 06:56:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:30.846 lslocks: write error 00:07:30.846 06:56:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1462139 00:07:30.846 06:56:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1462139 ']' 00:07:30.846 06:56:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1462139 00:07:30.846 06:56:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:30.846 06:56:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:30.846 06:56:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1462139 00:07:30.846 06:56:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:30.846 06:56:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:30.846 06:56:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1462139' 00:07:30.846 killing process with pid 1462139 00:07:30.846 06:56:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1462139 00:07:30.846 06:56:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1462139 00:07:36.103 06:56:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1462411 00:07:36.103 06:56:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1462411 ']' 00:07:36.103 06:56:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1462411 00:07:36.103 06:56:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:36.104 06:56:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:36.104 06:56:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1462411 00:07:36.104 06:56:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:36.104 06:56:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:36.104 06:56:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1462411' 00:07:36.104 killing process with pid 1462411 00:07:36.104 06:56:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1462411 00:07:36.104 06:56:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1462411 00:07:38.000 00:07:38.000 real 0m12.133s 00:07:38.000 user 0m12.279s 00:07:38.000 sys 0m1.635s 00:07:38.000 06:56:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.000 06:56:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:38.000 ************************************ 00:07:38.000 END TEST locking_app_on_unlocked_coremask 00:07:38.000 ************************************ 00:07:38.000 06:56:52 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:38.000 06:56:52 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:38.000 06:56:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.000 06:56:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:38.000 ************************************ 00:07:38.000 START TEST locking_app_on_locked_coremask 00:07:38.000 ************************************ 00:07:38.000 06:56:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:38.000 06:56:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1464304 00:07:38.000 06:56:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1464304 /var/tmp/spdk.sock 00:07:38.000 06:56:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:38.000 06:56:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1464304 ']' 00:07:38.000 06:56:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.000 06:56:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:38.000 06:56:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.000 06:56:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:38.000 06:56:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:38.000 [2024-07-24 06:56:52.475723] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:07:38.000 [2024-07-24 06:56:52.475819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1464304 ] 00:07:38.000 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.000 [2024-07-24 06:56:52.624050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.258 [2024-07-24 06:56:52.824655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.188 06:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:39.188 06:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:39.188 06:56:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1464578 00:07:39.188 06:56:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1464578 /var/tmp/spdk2.sock 00:07:39.188 06:56:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:39.188 06:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:39.188 06:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1464578 /var/tmp/spdk2.sock 00:07:39.188 06:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:39.188 06:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:39.188 06:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:39.188 06:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:39.188 06:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1464578 /var/tmp/spdk2.sock 00:07:39.188 06:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1464578 ']' 00:07:39.188 06:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:39.188 06:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:39.188 06:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:39.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:39.188 06:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:39.188 06:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:39.188 [2024-07-24 06:56:53.808113] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:07:39.188 [2024-07-24 06:56:53.808210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1464578 ] 00:07:39.445 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.445 [2024-07-24 06:56:54.004201] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1464304 has claimed it. 00:07:39.445 [2024-07-24 06:56:54.004255] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:40.009 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1464578) - No such process 00:07:40.009 ERROR: process (pid: 1464578) is no longer running 00:07:40.009 06:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:40.009 06:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:40.009 06:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:40.009 06:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:40.009 06:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:40.009 06:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:40.009 06:56:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1464304 00:07:40.009 06:56:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1464304 00:07:40.009 06:56:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:40.268 lslocks: write error 00:07:40.268 06:56:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1464304 00:07:40.268 06:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1464304 ']' 00:07:40.268 06:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1464304 00:07:40.268 06:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:40.268 06:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:40.269 06:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1464304 00:07:40.269 06:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:40.269 06:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:40.269 06:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1464304' 00:07:40.269 killing process with pid 1464304 00:07:40.269 06:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1464304 00:07:40.269 06:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1464304 00:07:42.852 00:07:42.852 real 0m4.787s 00:07:42.852 user 0m4.835s 00:07:42.852 sys 0m0.937s 00:07:42.852 06:56:57 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.852 06:56:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:42.852 ************************************ 00:07:42.852 END TEST locking_app_on_locked_coremask 00:07:42.852 ************************************ 00:07:42.852 06:56:57 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:42.852 06:56:57 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:42.852 06:56:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.852 06:56:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:42.852 ************************************ 00:07:42.852 START TEST locking_overlapped_coremask 00:07:42.852 ************************************ 00:07:42.852 06:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:42.852 06:56:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1465146 00:07:42.852 06:56:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1465146 /var/tmp/spdk.sock 00:07:42.852 06:56:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:42.852 06:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1465146 ']' 00:07:42.852 06:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.852 06:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:42.852 06:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.852 06:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:42.852 06:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:42.852 [2024-07-24 06:56:57.346431] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
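The failure traced just above is the point of locking_app_on_locked_coremask: with locking left on, a second target asked for the already-claimed core 0 aborts with "Cannot create lock on core 0, probably process 1464304 has claimed it" and "Unable to acquire lock on assigned core mask - exiting", and the NOT wrapper turns that expected non-zero exit into a pass. Reproduced outside the harness, with $SPDK_BIN as an illustrative path, it looks roughly like this:

$SPDK_BIN/spdk_tgt -m 0x1 & first=$!                    # holds the lock on core 0
sleep 2                                                 # crude stand-in for waitforlisten
if ! $SPDK_BIN/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
    echo "second instance refused to start: core 0 is locked by pid $first"
fi
kill "$first" && wait "$first"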
00:07:42.852 [2024-07-24 06:56:57.346519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465146 ] 00:07:42.852 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.111 [2024-07-24 06:56:57.489460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:43.111 [2024-07-24 06:56:57.679560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.111 [2024-07-24 06:56:57.679631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.111 [2024-07-24 06:56:57.679640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.048 06:56:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.048 06:56:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:44.048 06:56:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1465413 00:07:44.048 06:56:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1465413 /var/tmp/spdk2.sock 00:07:44.048 06:56:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:44.048 06:56:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:44.048 06:56:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1465413 /var/tmp/spdk2.sock 00:07:44.048 06:56:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:44.048 06:56:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.048 06:56:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:44.048 06:56:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.048 06:56:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1465413 /var/tmp/spdk2.sock 00:07:44.048 06:56:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1465413 ']' 00:07:44.048 06:56:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:44.048 06:56:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:44.049 06:56:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:44.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:44.049 06:56:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:44.049 06:56:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:44.049 [2024-07-24 06:56:58.642173] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
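The two targets in this test are started with core masks 0x7 (cores 0-2) and 0x1c (cores 2-4), so the only core they share is core 2, which is exactly the core named in the claim_cpu_cores error below. A one-line sketch of the overlap arithmetic:

  # intersect the two CPU masks used by the overlapped-coremask test
  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2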
00:07:44.049 [2024-07-24 06:56:58.642268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465413 ] 00:07:44.308 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.308 [2024-07-24 06:56:58.848090] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1465146 has claimed it. 00:07:44.308 [2024-07-24 06:56:58.848147] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:44.876 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1465413) - No such process 00:07:44.876 ERROR: process (pid: 1465413) is no longer running 00:07:44.876 06:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.876 06:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:44.876 06:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:44.876 06:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:44.876 06:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:44.876 06:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:44.876 06:56:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:44.876 06:56:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:44.876 06:56:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:44.876 06:56:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:44.876 06:56:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1465146 00:07:44.876 06:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1465146 ']' 00:07:44.876 06:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1465146 00:07:44.876 06:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:44.876 06:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:44.876 06:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1465146 00:07:44.876 06:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:44.876 06:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:44.876 06:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1465146' 00:07:44.876 killing process with pid 1465146 00:07:44.876 06:56:59 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 1465146 00:07:44.876 06:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1465146 00:07:47.413 00:07:47.413 real 0m4.433s 00:07:47.413 user 0m11.493s 00:07:47.413 sys 0m0.765s 00:07:47.413 06:57:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.413 06:57:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:47.413 ************************************ 00:07:47.413 END TEST locking_overlapped_coremask 00:07:47.413 ************************************ 00:07:47.413 06:57:01 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:47.413 06:57:01 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:47.413 06:57:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.413 06:57:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:47.413 ************************************ 00:07:47.413 START TEST locking_overlapped_coremask_via_rpc 00:07:47.413 ************************************ 00:07:47.413 06:57:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:47.413 06:57:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1466088 00:07:47.413 06:57:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1466088 /var/tmp/spdk.sock 00:07:47.413 06:57:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:47.413 06:57:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1466088 ']' 00:07:47.413 06:57:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.413 06:57:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:47.413 06:57:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.413 06:57:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:47.413 06:57:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.413 [2024-07-24 06:57:01.858103] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:07:47.413 [2024-07-24 06:57:01.858215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1466088 ] 00:07:47.413 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.413 [2024-07-24 06:57:02.002509] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:47.413 [2024-07-24 06:57:02.002547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:47.673 [2024-07-24 06:57:02.204031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.673 [2024-07-24 06:57:02.204101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.673 [2024-07-24 06:57:02.204106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.608 06:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:48.608 06:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:48.608 06:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1466381 00:07:48.609 06:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1466381 /var/tmp/spdk2.sock 00:07:48.609 06:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1466381 ']' 00:07:48.609 06:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:48.609 06:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:48.609 06:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:48.609 06:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:48.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:48.609 06:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:48.609 06:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.609 [2024-07-24 06:57:03.202533] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:07:48.609 [2024-07-24 06:57:03.202643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1466381 ] 00:07:48.875 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.875 [2024-07-24 06:57:03.406595] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
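When the overlapped-coremask case passes, the check_remaining_locks helper traced above (event/cpu_locks.sh@36-38) just compares the lock files that exist against the set expected for mask 0x7. A condensed sketch of that comparison, with the same /var/tmp paths as the trace; it assumes the first target is still running so the glob actually matches:

  # expect exactly one lock file per core in mask 0x7 (cores 000..002)
  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo 'locks intact'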
00:07:48.875 [2024-07-24 06:57:03.406645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:49.444 [2024-07-24 06:57:03.848912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:49.444 [2024-07-24 06:57:03.852688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.444 [2024-07-24 06:57:03.852711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:51.347 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:51.347 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:51.347 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:51.347 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.348 [2024-07-24 06:57:05.700749] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1466088 has claimed it. 
00:07:51.348 request: 00:07:51.348 { 00:07:51.348 "method": "framework_enable_cpumask_locks", 00:07:51.348 "req_id": 1 00:07:51.348 } 00:07:51.348 Got JSON-RPC error response 00:07:51.348 response: 00:07:51.348 { 00:07:51.348 "code": -32603, 00:07:51.348 "message": "Failed to claim CPU core: 2" 00:07:51.348 } 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1466088 /var/tmp/spdk.sock 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1466088 ']' 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1466381 /var/tmp/spdk2.sock 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1466381 ']' 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:51.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
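The -32603 response above is what framework_enable_cpumask_locks returns when the second instance, started with --disable-cpumask-locks, later tries to claim a core the first instance already owns. In the trace this goes through the suite's rpc_cmd wrapper; issued by hand it would presumably look like the sketch below, assuming rpc_cmd forwards to scripts/rpc.py with the -s socket option as the traced arguments suggest:

  # ask the second target (spdk2.sock) to claim its cores after a lock-free startup
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # expected here: JSON-RPC error -32603, "Failed to claim CPU core: 2"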
00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:51.348 06:57:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.607 06:57:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:51.607 06:57:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:51.607 06:57:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:51.607 06:57:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:51.607 06:57:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:51.607 06:57:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:51.607 00:07:51.607 real 0m4.327s 00:07:51.607 user 0m0.997s 00:07:51.607 sys 0m0.241s 00:07:51.607 06:57:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.607 06:57:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.608 ************************************ 00:07:51.608 END TEST locking_overlapped_coremask_via_rpc 00:07:51.608 ************************************ 00:07:51.608 06:57:06 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:51.608 06:57:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1466088 ]] 00:07:51.608 06:57:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1466088 00:07:51.608 06:57:06 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1466088 ']' 00:07:51.608 06:57:06 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1466088 00:07:51.608 06:57:06 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:51.608 06:57:06 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:51.608 06:57:06 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1466088 00:07:51.608 06:57:06 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:51.608 06:57:06 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:51.608 06:57:06 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1466088' 00:07:51.608 killing process with pid 1466088 00:07:51.608 06:57:06 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1466088 00:07:51.608 06:57:06 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1466088 00:07:54.143 06:57:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1466381 ]] 00:07:54.143 06:57:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1466381 00:07:54.143 06:57:08 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1466381 ']' 00:07:54.143 06:57:08 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1466381 00:07:54.143 06:57:08 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:54.143 06:57:08 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' 
Linux = Linux ']' 00:07:54.143 06:57:08 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1466381 00:07:54.143 06:57:08 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:54.143 06:57:08 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:54.143 06:57:08 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1466381' 00:07:54.143 killing process with pid 1466381 00:07:54.143 06:57:08 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1466381 00:07:54.143 06:57:08 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1466381 00:07:56.676 06:57:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:56.676 06:57:11 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:56.676 06:57:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1466088 ]] 00:07:56.676 06:57:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1466088 00:07:56.676 06:57:11 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1466088 ']' 00:07:56.676 06:57:11 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1466088 00:07:56.676 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1466088) - No such process 00:07:56.676 06:57:11 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1466088 is not found' 00:07:56.676 Process with pid 1466088 is not found 00:07:56.676 06:57:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1466381 ]] 00:07:56.676 06:57:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1466381 00:07:56.676 06:57:11 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1466381 ']' 00:07:56.676 06:57:11 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1466381 00:07:56.676 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1466381) - No such process 00:07:56.676 06:57:11 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1466381 is not found' 00:07:56.676 Process with pid 1466381 is not found 00:07:56.676 06:57:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:56.676 00:07:56.676 real 0m51.369s 00:07:56.676 user 1m24.662s 00:07:56.676 sys 0m7.977s 00:07:56.676 06:57:11 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.676 06:57:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:56.676 ************************************ 00:07:56.676 END TEST cpu_locks 00:07:56.676 ************************************ 00:07:56.676 00:07:56.676 real 1m20.664s 00:07:56.676 user 2m17.193s 00:07:56.676 sys 0m12.608s 00:07:56.676 06:57:11 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.676 06:57:11 event -- common/autotest_common.sh@10 -- # set +x 00:07:56.676 ************************************ 00:07:56.676 END TEST event 00:07:56.676 ************************************ 00:07:56.676 06:57:11 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:56.676 06:57:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:56.676 06:57:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.676 06:57:11 -- common/autotest_common.sh@10 -- # set +x 00:07:56.676 ************************************ 00:07:56.676 START TEST thread 00:07:56.676 ************************************ 00:07:56.676 06:57:11 thread -- common/autotest_common.sh@1123 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:56.934 * Looking for test storage... 00:07:56.934 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:07:56.934 06:57:11 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:56.934 06:57:11 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:56.934 06:57:11 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.934 06:57:11 thread -- common/autotest_common.sh@10 -- # set +x 00:07:56.934 ************************************ 00:07:56.934 START TEST thread_poller_perf 00:07:56.934 ************************************ 00:07:56.934 06:57:11 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:56.934 [2024-07-24 06:57:11.454167] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:07:56.934 [2024-07-24 06:57:11.454252] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468238 ] 00:07:56.934 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.193 [2024-07-24 06:57:11.596776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.193 [2024-07-24 06:57:11.800076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.193 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:58.627 ====================================== 00:07:58.627 busy:2509411064 (cyc) 00:07:58.627 total_run_count: 420000 00:07:58.627 tsc_hz: 2500000000 (cyc) 00:07:58.627 ====================================== 00:07:58.627 poller_cost: 5974 (cyc), 2389 (nsec) 00:07:58.627 00:07:58.627 real 0m1.789s 00:07:58.627 user 0m1.603s 00:07:58.627 sys 0m0.179s 00:07:58.627 06:57:13 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.627 06:57:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:58.627 ************************************ 00:07:58.627 END TEST thread_poller_perf 00:07:58.627 ************************************ 00:07:58.627 06:57:13 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:58.627 06:57:13 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:58.627 06:57:13 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.627 06:57:13 thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.886 ************************************ 00:07:58.886 START TEST thread_poller_perf 00:07:58.886 ************************************ 00:07:58.886 06:57:13 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:58.886 [2024-07-24 06:57:13.326802] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
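The poller_cost figures in the report above are consistent with cycles-per-poll being busy cycles divided by total_run_count, converted to nanoseconds via the TSC rate. A sketch of that arithmetic for the first run (2509411064 busy cycles, 420000 polls, 2.5 GHz TSC); the formula is inferred from the printed counters rather than taken from poller_perf itself:

  # reproduce poller_cost from the raw counters of the 1-usec-period run
  awk 'BEGIN {
    busy = 2509411064; runs = 420000; tsc_hz = 2500000000
    cyc  = busy / runs              # ~5974 cycles per poll
    nsec = cyc / (tsc_hz / 1e9)     # ~2389 ns per poll
    printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, nsec
  }'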
00:07:58.886 [2024-07-24 06:57:13.326894] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468566 ] 00:07:58.886 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.886 [2024-07-24 06:57:13.469438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.145 [2024-07-24 06:57:13.672616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.145 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:00.522 ====================================== 00:08:00.522 busy:2503064476 (cyc) 00:08:00.522 total_run_count: 5535000 00:08:00.522 tsc_hz: 2500000000 (cyc) 00:08:00.522 ====================================== 00:08:00.522 poller_cost: 452 (cyc), 180 (nsec) 00:08:00.522 00:08:00.522 real 0m1.792s 00:08:00.522 user 0m1.615s 00:08:00.522 sys 0m0.170s 00:08:00.522 06:57:15 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.522 06:57:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:00.522 ************************************ 00:08:00.522 END TEST thread_poller_perf 00:08:00.522 ************************************ 00:08:00.522 06:57:15 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:00.522 00:08:00.522 real 0m3.849s 00:08:00.522 user 0m3.315s 00:08:00.522 sys 0m0.542s 00:08:00.522 06:57:15 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.522 06:57:15 thread -- common/autotest_common.sh@10 -- # set +x 00:08:00.522 ************************************ 00:08:00.522 END TEST thread 00:08:00.522 ************************************ 00:08:00.781 06:57:15 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:08:00.781 06:57:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:00.781 06:57:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.781 06:57:15 -- common/autotest_common.sh@10 -- # set +x 00:08:00.781 ************************************ 00:08:00.781 START TEST accel 00:08:00.781 ************************************ 00:08:00.781 06:57:15 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:08:00.781 * Looking for test storage... 
00:08:00.781 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:08:00.781 06:57:15 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:08:00.781 06:57:15 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:08:00.781 06:57:15 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:00.781 06:57:15 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1469113 00:08:00.781 06:57:15 accel -- accel/accel.sh@63 -- # waitforlisten 1469113 00:08:00.781 06:57:15 accel -- common/autotest_common.sh@829 -- # '[' -z 1469113 ']' 00:08:00.781 06:57:15 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.781 06:57:15 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:00.781 06:57:15 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:08:00.781 06:57:15 accel -- accel/accel.sh@61 -- # build_accel_config 00:08:00.781 06:57:15 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.781 06:57:15 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:00.781 06:57:15 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:00.781 06:57:15 accel -- common/autotest_common.sh@10 -- # set +x 00:08:00.781 06:57:15 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:00.781 06:57:15 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.781 06:57:15 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.781 06:57:15 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:00.781 06:57:15 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:00.781 06:57:15 accel -- accel/accel.sh@41 -- # jq -r . 00:08:00.781 [2024-07-24 06:57:15.400972] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:08:00.781 [2024-07-24 06:57:15.401068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469113 ] 00:08:01.040 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.040 [2024-07-24 06:57:15.541259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.299 [2024-07-24 06:57:15.751649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.236 06:57:16 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:02.236 06:57:16 accel -- common/autotest_common.sh@862 -- # return 0 00:08:02.236 06:57:16 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:08:02.236 06:57:16 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:08:02.236 06:57:16 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:08:02.236 06:57:16 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:08:02.236 06:57:16 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:08:02.236 06:57:16 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:08:02.236 06:57:16 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.236 06:57:16 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:08:02.236 06:57:16 accel -- common/autotest_common.sh@10 -- # set +x 00:08:02.236 06:57:16 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.236 06:57:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.236 06:57:16 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.236 06:57:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.236 06:57:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.236 06:57:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.236 06:57:16 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.236 06:57:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.236 06:57:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.236 06:57:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.236 06:57:16 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.236 06:57:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.236 06:57:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.237 06:57:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.237 06:57:16 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.237 06:57:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.237 06:57:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.237 06:57:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.237 06:57:16 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.237 06:57:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.237 06:57:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.237 06:57:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.237 06:57:16 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.237 06:57:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.237 06:57:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.237 06:57:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.237 06:57:16 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.237 06:57:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.237 06:57:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.237 06:57:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.237 06:57:16 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.237 06:57:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.237 06:57:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.237 06:57:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.237 06:57:16 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.237 06:57:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.237 06:57:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.237 06:57:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.237 06:57:16 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.237 06:57:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.237 06:57:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.237 06:57:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.237 06:57:16 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.237 06:57:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.237 06:57:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.237 
06:57:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.237 06:57:16 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.237 06:57:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.237 06:57:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.237 06:57:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.237 06:57:16 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.237 06:57:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.237 06:57:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.237 06:57:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.237 06:57:16 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.237 06:57:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.237 06:57:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.237 06:57:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.237 06:57:16 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.237 06:57:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.237 06:57:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.237 06:57:16 accel -- accel/accel.sh@75 -- # killprocess 1469113 00:08:02.237 06:57:16 accel -- common/autotest_common.sh@948 -- # '[' -z 1469113 ']' 00:08:02.237 06:57:16 accel -- common/autotest_common.sh@952 -- # kill -0 1469113 00:08:02.237 06:57:16 accel -- common/autotest_common.sh@953 -- # uname 00:08:02.237 06:57:16 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:02.237 06:57:16 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1469113 00:08:02.237 06:57:16 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:02.237 06:57:16 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:02.237 06:57:16 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1469113' 00:08:02.237 killing process with pid 1469113 00:08:02.237 06:57:16 accel -- common/autotest_common.sh@967 -- # kill 1469113 00:08:02.237 06:57:16 accel -- common/autotest_common.sh@972 -- # wait 1469113 00:08:04.773 06:57:19 accel -- accel/accel.sh@76 -- # trap - ERR 00:08:04.773 06:57:19 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:08:04.773 06:57:19 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:04.773 06:57:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.773 06:57:19 accel -- common/autotest_common.sh@10 -- # set +x 00:08:04.773 06:57:19 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:08:04.773 06:57:19 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:08:04.773 06:57:19 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:08:04.773 06:57:19 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:04.773 06:57:19 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:04.773 06:57:19 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.773 06:57:19 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.773 06:57:19 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:04.773 06:57:19 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:08:04.773 06:57:19 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
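The long loop above is accel.sh turning the accel_get_opc_assignments RPC output into an opcode-to-module map: jq flattens the JSON object into key=value lines, and each line is then split into expected_opcs. A small sketch of the same jq step on made-up input; the opcode names are illustrative, borrowed from the accel_perf usage text later in the log:

  # flatten an opcode-assignment object into key=value pairs, as accel.sh does
  echo '{"copy": "software", "fill": "software", "crc32c": "software"}' |
    jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # copy=software
  # fill=software
  # crc32c=software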
00:08:04.773 06:57:19 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.773 06:57:19 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:08:04.773 06:57:19 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:08:04.773 06:57:19 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:04.773 06:57:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.773 06:57:19 accel -- common/autotest_common.sh@10 -- # set +x 00:08:04.773 ************************************ 00:08:04.773 START TEST accel_missing_filename 00:08:04.773 ************************************ 00:08:04.773 06:57:19 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:08:04.773 06:57:19 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:08:04.773 06:57:19 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:08:04.773 06:57:19 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:04.773 06:57:19 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.773 06:57:19 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:04.773 06:57:19 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.773 06:57:19 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:08:04.773 06:57:19 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:08:04.773 06:57:19 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:08:04.773 06:57:19 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:04.773 06:57:19 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:04.773 06:57:19 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.773 06:57:19 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.773 06:57:19 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:04.773 06:57:19 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:08:04.773 06:57:19 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:08:04.773 [2024-07-24 06:57:19.329889] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:08:04.773 [2024-07-24 06:57:19.329977] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469691 ] 00:08:05.032 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.032 [2024-07-24 06:57:19.478578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.291 [2024-07-24 06:57:19.701719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.550 [2024-07-24 06:57:19.942324] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:06.119 [2024-07-24 06:57:20.458905] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:08:06.377 A filename is required. 
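The accel_missing_filename failure above is the compress workload rejecting a run without an input file; per the usage text, compress/decompress take the uncompressed input via -l, and the next test passes the bundled test/accel/bib file for exactly this reason. A sketch of the accepted form, using the same binary and input file paths that appear in the trace:

  # compress workload needs -l <uncompressed input>; bib is the file the suite itself uses
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compress \
    -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib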
00:08:06.377 06:57:20 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:08:06.377 06:57:20 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:06.378 06:57:20 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:08:06.378 06:57:20 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:08:06.378 06:57:20 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:08:06.378 06:57:20 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:06.378 00:08:06.378 real 0m1.581s 00:08:06.378 user 0m1.372s 00:08:06.378 sys 0m0.228s 00:08:06.378 06:57:20 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.378 06:57:20 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:08:06.378 ************************************ 00:08:06.378 END TEST accel_missing_filename 00:08:06.378 ************************************ 00:08:06.378 06:57:20 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:06.378 06:57:20 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:08:06.378 06:57:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.378 06:57:20 accel -- common/autotest_common.sh@10 -- # set +x 00:08:06.378 ************************************ 00:08:06.378 START TEST accel_compress_verify 00:08:06.378 ************************************ 00:08:06.378 06:57:20 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:06.378 06:57:20 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:08:06.378 06:57:20 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:06.378 06:57:20 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:06.378 06:57:20 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:06.378 06:57:20 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:06.378 06:57:20 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:06.378 06:57:20 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:06.378 06:57:20 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:06.378 06:57:20 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:06.378 06:57:20 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:06.378 06:57:20 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:06.378 06:57:20 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.378 06:57:20 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.378 06:57:20 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:06.378 06:57:20 
accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:06.378 06:57:20 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:08:06.378 [2024-07-24 06:57:20.978052] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:08:06.378 [2024-07-24 06:57:20.978151] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469988 ] 00:08:06.637 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.637 [2024-07-24 06:57:21.122184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.896 [2024-07-24 06:57:21.325211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.155 [2024-07-24 06:57:21.548939] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:07.724 [2024-07-24 06:57:22.062091] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:08:07.984 00:08:07.984 Compression does not support the verify option, aborting. 00:08:07.984 06:57:22 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:08:07.984 06:57:22 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:07.984 06:57:22 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:08:07.984 06:57:22 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:08:07.984 06:57:22 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:08:07.984 06:57:22 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:07.984 00:08:07.984 real 0m1.533s 00:08:07.984 user 0m1.335s 00:08:07.984 sys 0m0.232s 00:08:07.984 06:57:22 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:07.984 06:57:22 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:08:07.984 ************************************ 00:08:07.984 END TEST accel_compress_verify 00:08:07.984 ************************************ 00:08:07.984 06:57:22 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:08:07.984 06:57:22 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:07.984 06:57:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.984 06:57:22 accel -- common/autotest_common.sh@10 -- # set +x 00:08:07.984 ************************************ 00:08:07.984 START TEST accel_wrong_workload 00:08:07.984 ************************************ 00:08:07.984 06:57:22 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:08:07.984 06:57:22 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:08:07.984 06:57:22 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:08:07.984 06:57:22 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:07.984 06:57:22 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:07.984 06:57:22 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:07.984 06:57:22 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:07.984 06:57:22 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w 
foobar 00:08:07.984 06:57:22 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:08:07.984 06:57:22 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:08:07.984 06:57:22 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:07.984 06:57:22 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:07.984 06:57:22 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.984 06:57:22 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.984 06:57:22 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:07.984 06:57:22 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:08:07.984 06:57:22 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:08:07.984 Unsupported workload type: foobar 00:08:07.984 [2024-07-24 06:57:22.575533] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:08:07.984 accel_perf options: 00:08:07.984 [-h help message] 00:08:07.984 [-q queue depth per core] 00:08:07.984 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:07.984 [-T number of threads per core 00:08:07.984 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:07.984 [-t time in seconds] 00:08:07.984 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:07.984 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:07.984 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:07.984 [-l for compress/decompress workloads, name of uncompressed input file 00:08:07.984 [-S for crc32c workload, use this seed value (default 0) 00:08:07.984 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:07.984 [-f for fill workload, use this BYTE value (default 255) 00:08:07.984 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:07.984 [-y verify result if this switch is on] 00:08:07.984 [-a tasks to allocate per core (default: same value as -q)] 00:08:07.984 Can be used to spread operations across a wider range of memory. 
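The usage text above lists the workload types accel_perf accepts, which is why '-w foobar' is rejected before any work is queued. For contrast, a valid invocation of the same binary with one of the listed workloads, mirroring the crc32c test that starts further below:

  # 1-second crc32c workload, seed value 32, with result verification (-y)
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y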
00:08:07.984 06:57:22 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:08:07.984 06:57:22 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:07.984 06:57:22 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:07.984 06:57:22 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:07.984 00:08:07.984 real 0m0.073s 00:08:07.984 user 0m0.060s 00:08:07.984 sys 0m0.048s 00:08:07.984 06:57:22 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:07.984 06:57:22 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:08:07.984 ************************************ 00:08:07.984 END TEST accel_wrong_workload 00:08:07.984 ************************************ 00:08:08.243 06:57:22 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:08:08.244 06:57:22 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:08:08.244 06:57:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.244 06:57:22 accel -- common/autotest_common.sh@10 -- # set +x 00:08:08.244 ************************************ 00:08:08.244 START TEST accel_negative_buffers 00:08:08.244 ************************************ 00:08:08.244 06:57:22 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:08:08.244 06:57:22 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:08:08.244 06:57:22 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:08:08.244 06:57:22 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:08.244 06:57:22 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:08.244 06:57:22 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:08.244 06:57:22 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:08.244 06:57:22 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:08:08.244 06:57:22 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:08:08.244 06:57:22 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:08:08.244 06:57:22 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:08.244 06:57:22 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:08.244 06:57:22 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.244 06:57:22 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.244 06:57:22 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:08.244 06:57:22 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:08:08.244 06:57:22 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:08:08.244 -x option must be non-negative. 
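accel_negative_buffers fails for the same reason the foobar case did: accel_perf validates its arguments up front, and '-x -1' trips the non-negative check reported just above (the usage text repeated below also notes xor needs at least 2 source buffers). A valid xor run would presumably look like:

  # xor across the minimum of 2 source buffers, with verification
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2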
00:08:08.244 [2024-07-24 06:57:22.724356] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:08:08.244 accel_perf options: 00:08:08.244 [-h help message] 00:08:08.244 [-q queue depth per core] 00:08:08.244 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:08.244 [-T number of threads per core 00:08:08.244 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:08.244 [-t time in seconds] 00:08:08.244 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:08.244 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:08.244 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:08.244 [-l for compress/decompress workloads, name of uncompressed input file 00:08:08.244 [-S for crc32c workload, use this seed value (default 0) 00:08:08.244 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:08.244 [-f for fill workload, use this BYTE value (default 255) 00:08:08.244 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:08.244 [-y verify result if this switch is on] 00:08:08.244 [-a tasks to allocate per core (default: same value as -q)] 00:08:08.244 Can be used to spread operations across a wider range of memory. 00:08:08.244 06:57:22 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:08:08.244 06:57:22 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:08.244 06:57:22 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:08.244 06:57:22 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:08.244 00:08:08.244 real 0m0.079s 00:08:08.244 user 0m0.076s 00:08:08.244 sys 0m0.042s 00:08:08.244 06:57:22 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.244 06:57:22 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:08:08.244 ************************************ 00:08:08.244 END TEST accel_negative_buffers 00:08:08.244 ************************************ 00:08:08.244 06:57:22 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:08:08.244 06:57:22 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:08.244 06:57:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.244 06:57:22 accel -- common/autotest_common.sh@10 -- # set +x 00:08:08.244 ************************************ 00:08:08.244 START TEST accel_crc32c 00:08:08.244 ************************************ 00:08:08.244 06:57:22 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:08:08.244 06:57:22 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:08.244 06:57:22 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:08.244 06:57:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.244 06:57:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.244 06:57:22 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:08.244 06:57:22 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:08:08.244 06:57:22 accel.accel_crc32c 
-- accel/accel.sh@12 -- # build_accel_config 00:08:08.244 06:57:22 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:08.244 06:57:22 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:08.244 06:57:22 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.244 06:57:22 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.244 06:57:22 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:08.244 06:57:22 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:08.244 06:57:22 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:08.244 [2024-07-24 06:57:22.869723] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:08:08.244 [2024-07-24 06:57:22.869806] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1470456 ] 00:08:08.504 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.504 [2024-07-24 06:57:23.013548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.763 [2024-07-24 06:57:23.211823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.021 06:57:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:09.021 06:57:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.021 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.021 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.021 06:57:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:09.021 06:57:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.021 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.021 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.021 06:57:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:09.021 06:57:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.021 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.021 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read 
-r var val 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.022 06:57:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.928 06:57:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:10.928 06:57:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.928 06:57:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.928 06:57:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 
00:08:10.928 06:57:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:10.928 06:57:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.928 06:57:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.928 06:57:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.928 06:57:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:10.928 06:57:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.928 06:57:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.928 06:57:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.928 06:57:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:10.928 06:57:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.928 06:57:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.928 06:57:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.928 06:57:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:10.928 06:57:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.928 06:57:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.928 06:57:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.928 06:57:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:10.928 06:57:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.928 06:57:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.928 06:57:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.928 06:57:25 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:10.928 06:57:25 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:10.928 06:57:25 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:10.928 00:08:10.928 real 0m2.568s 00:08:10.928 user 0m0.009s 00:08:10.928 sys 0m0.003s 00:08:10.928 06:57:25 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.928 06:57:25 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:10.928 ************************************ 00:08:10.928 END TEST accel_crc32c 00:08:10.928 ************************************ 00:08:10.929 06:57:25 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:08:10.929 06:57:25 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:10.929 06:57:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.929 06:57:25 accel -- common/autotest_common.sh@10 -- # set +x 00:08:10.929 ************************************ 00:08:10.929 START TEST accel_crc32c_C2 00:08:10.929 ************************************ 00:08:10.929 06:57:25 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:08:10.929 06:57:25 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:10.929 06:57:25 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:10.929 06:57:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:10.929 06:57:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:10.929 06:57:25 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:10.929 06:57:25 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:08:10.929 06:57:25 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 
00:08:10.929 06:57:25 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:10.929 06:57:25 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:10.929 06:57:25 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.929 06:57:25 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.929 06:57:25 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:10.929 06:57:25 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:10.929 06:57:25 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:10.929 [2024-07-24 06:57:25.513995] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:08:10.929 [2024-07-24 06:57:25.514082] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1470864 ] 00:08:11.188 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.188 [2024-07-24 06:57:25.661244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.448 [2024-07-24 06:57:25.864768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.707 
06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.707 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:11.708 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.708 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.708 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.708 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:11.708 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.708 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.708 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.708 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:11.708 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.708 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.708 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.708 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:11.708 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.708 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.708 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.708 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:11.708 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.708 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.708 06:57:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.647 06:57:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.647 06:57:28 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:08:13.647 06:57:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.647 06:57:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.647 06:57:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.647 06:57:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.647 06:57:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.647 06:57:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.647 06:57:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.647 06:57:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.647 06:57:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.647 06:57:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.647 06:57:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.647 06:57:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.647 06:57:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.647 06:57:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.647 06:57:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.647 06:57:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.647 06:57:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.647 06:57:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.647 06:57:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.648 06:57:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.648 06:57:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.648 06:57:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.648 06:57:28 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:13.648 06:57:28 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:13.648 06:57:28 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.648 00:08:13.648 real 0m2.563s 00:08:13.648 user 0m2.328s 00:08:13.648 sys 0m0.236s 00:08:13.648 06:57:28 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.648 06:57:28 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:13.648 ************************************ 00:08:13.648 END TEST accel_crc32c_C2 00:08:13.648 ************************************ 00:08:13.648 06:57:28 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:08:13.648 06:57:28 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:13.648 06:57:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.648 06:57:28 accel -- common/autotest_common.sh@10 -- # set +x 00:08:13.648 ************************************ 00:08:13.648 START TEST accel_copy 00:08:13.648 ************************************ 00:08:13.648 06:57:28 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:08:13.648 06:57:28 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:13.648 06:57:28 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:08:13.648 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.648 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.648 06:57:28 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:08:13.648 06:57:28 accel.accel_copy -- accel/accel.sh@12 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:08:13.648 06:57:28 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:13.648 06:57:28 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:13.648 06:57:28 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:13.648 06:57:28 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.648 06:57:28 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.648 06:57:28 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:13.648 06:57:28 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:13.648 06:57:28 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:08:13.648 [2024-07-24 06:57:28.141917] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:08:13.648 [2024-07-24 06:57:28.142010] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471405 ] 00:08:13.648 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.907 [2024-07-24 06:57:28.283535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.907 [2024-07-24 06:57:28.487776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- 
# IFS=: 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.166 06:57:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.071 06:57:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.071 06:57:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.071 06:57:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.071 06:57:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.071 06:57:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.071 06:57:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.071 06:57:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.071 06:57:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.071 06:57:30 
accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.071 06:57:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.071 06:57:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.071 06:57:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.071 06:57:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.071 06:57:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.071 06:57:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.071 06:57:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.071 06:57:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.071 06:57:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.071 06:57:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.071 06:57:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.071 06:57:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.071 06:57:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.071 06:57:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.071 06:57:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.071 06:57:30 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:16.071 06:57:30 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:08:16.071 06:57:30 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:16.071 00:08:16.071 real 0m2.574s 00:08:16.071 user 0m2.358s 00:08:16.071 sys 0m0.217s 00:08:16.071 06:57:30 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.071 06:57:30 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:08:16.071 ************************************ 00:08:16.071 END TEST accel_copy 00:08:16.071 ************************************ 00:08:16.071 06:57:30 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:16.071 06:57:30 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:16.071 06:57:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.071 06:57:30 accel -- common/autotest_common.sh@10 -- # set +x 00:08:16.329 ************************************ 00:08:16.329 START TEST accel_fill 00:08:16.329 ************************************ 00:08:16.329 06:57:30 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:16.329 06:57:30 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:08:16.329 06:57:30 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:08:16.329 06:57:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.329 06:57:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.329 06:57:30 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:16.329 06:57:30 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:16.329 06:57:30 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:08:16.329 06:57:30 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:16.329 06:57:30 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:16.329 06:57:30 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:16.329 06:57:30 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:16.329 06:57:30 accel.accel_fill -- accel/accel.sh@36 -- # [[ 
-n '' ]] 00:08:16.329 06:57:30 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:08:16.329 06:57:30 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:08:16.329 [2024-07-24 06:57:30.771413] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:08:16.329 [2024-07-24 06:57:30.771502] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471924 ] 00:08:16.329 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.329 [2024-07-24 06:57:30.911984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.588 [2024-07-24 06:57:31.119907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.846 06:57:31 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:08:16.846 06:57:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.847 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.847 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.847 06:57:31 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:08:16.847 06:57:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.847 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.847 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.847 06:57:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:16.847 06:57:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.847 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.847 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.847 06:57:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:16.847 06:57:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.847 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.847 06:57:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:18.752 06:57:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:18.752 06:57:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:18.752 06:57:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:18.752 06:57:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:18.752 06:57:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:18.752 06:57:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:18.752 06:57:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:18.752 06:57:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:18.752 06:57:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:18.752 06:57:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:18.752 06:57:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:18.752 06:57:33 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:08:18.752 06:57:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:18.752 06:57:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:18.752 06:57:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:18.752 06:57:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:18.752 06:57:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:18.752 06:57:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:18.752 06:57:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:18.752 06:57:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:18.752 06:57:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:18.752 06:57:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:18.752 06:57:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:18.752 06:57:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:18.752 06:57:33 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:18.752 06:57:33 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:08:18.752 06:57:33 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:18.752 00:08:18.752 real 0m2.508s 00:08:18.752 user 0m2.296s 00:08:18.752 sys 0m0.215s 00:08:18.752 06:57:33 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.752 06:57:33 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:08:18.752 ************************************ 00:08:18.752 END TEST accel_fill 00:08:18.752 ************************************ 00:08:18.752 06:57:33 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:08:18.752 06:57:33 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:18.752 06:57:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.752 06:57:33 accel -- common/autotest_common.sh@10 -- # set +x 00:08:18.752 ************************************ 00:08:18.752 START TEST accel_copy_crc32c 00:08:18.752 ************************************ 00:08:18.752 06:57:33 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:08:18.752 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:18.752 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:18.752 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.752 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.752 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:18.752 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:18.752 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:18.752 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:18.752 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:18.752 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:18.752 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:18.752 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:18.752 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:18.752 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
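[Editor's sketch, not part of the captured log] The trace above is accel.sh assembling the accel JSON config before launching accel_perf over /dev/fd/62 for the copy_crc32c workload; since no hardware module is configured in this run (the result check later reports accel_module=software), a hand-run equivalent that skips the config plumbing would be, assuming the same built tree:

  # 1-second copy_crc32c pass on the software accel module, verifying each result
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y
  # the C2 variant traced further down only adds -C 2, i.e. an io vector size of 2 per the usage text
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2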
00:08:18.752 [2024-07-24 06:57:33.363250] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:08:18.752 [2024-07-24 06:57:33.363352] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472249 ] 00:08:19.011 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.011 [2024-07-24 06:57:33.510846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.269 [2024-07-24 06:57:33.715715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 
00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.528 06:57:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.434 06:57:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:21.434 06:57:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.434 06:57:35 accel.accel_copy_crc32c 
-- accel/accel.sh@19 -- # IFS=: 00:08:21.434 06:57:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.434 06:57:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:21.434 06:57:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.434 06:57:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.434 06:57:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.434 06:57:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:21.434 06:57:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.434 06:57:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.434 06:57:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.434 06:57:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:21.434 06:57:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.434 06:57:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.434 06:57:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.434 06:57:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:21.434 06:57:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.434 06:57:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.434 06:57:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.434 06:57:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:21.434 06:57:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.434 06:57:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.434 06:57:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.434 06:57:35 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:21.434 06:57:35 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:21.434 06:57:35 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:21.434 00:08:21.434 real 0m2.583s 00:08:21.434 user 0m2.351s 00:08:21.434 sys 0m0.234s 00:08:21.434 06:57:35 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.434 06:57:35 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:21.434 ************************************ 00:08:21.434 END TEST accel_copy_crc32c 00:08:21.434 ************************************ 00:08:21.434 06:57:35 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:21.434 06:57:35 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:21.434 06:57:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.434 06:57:35 accel -- common/autotest_common.sh@10 -- # set +x 00:08:21.434 ************************************ 00:08:21.434 START TEST accel_copy_crc32c_C2 00:08:21.434 ************************************ 00:08:21.434 06:57:35 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:21.434 06:57:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:21.434 06:57:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:21.434 06:57:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.434 06:57:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.434 06:57:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:21.434 06:57:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:21.434 06:57:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:21.434 06:57:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:21.434 06:57:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:21.434 06:57:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:21.434 06:57:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:21.434 06:57:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:21.434 06:57:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:21.434 06:57:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:21.434 [2024-07-24 06:57:36.015739] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:08:21.434 [2024-07-24 06:57:36.015829] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472799 ] 00:08:21.693 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.693 [2024-07-24 06:57:36.160053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.952 [2024-07-24 06:57:36.355267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.952 06:57:36 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 
00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.952 06:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:23.857 06:57:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:23.857 06:57:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.857 06:57:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:23.857 06:57:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:23.857 06:57:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:23.857 06:57:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.857 06:57:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:23.857 06:57:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:23.857 06:57:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:23.857 06:57:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.857 06:57:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:23.857 06:57:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:23.857 06:57:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:23.857 06:57:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.857 06:57:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:23.857 06:57:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:23.857 06:57:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:23.857 06:57:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.857 06:57:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:23.857 06:57:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:23.857 06:57:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:23.857 06:57:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.857 06:57:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:23.857 06:57:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:23.857 06:57:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:23.857 06:57:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:23.857 06:57:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:23.857 00:08:23.857 real 0m2.518s 00:08:23.857 user 0m0.008s 00:08:23.857 sys 0m0.004s 00:08:23.857 06:57:38 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:23.857 06:57:38 accel.accel_copy_crc32c_C2 -- 
common/autotest_common.sh@10 -- # set +x 00:08:23.857 ************************************ 00:08:23.857 END TEST accel_copy_crc32c_C2 00:08:23.857 ************************************ 00:08:24.116 06:57:38 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:24.116 06:57:38 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:24.116 06:57:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.116 06:57:38 accel -- common/autotest_common.sh@10 -- # set +x 00:08:24.116 ************************************ 00:08:24.116 START TEST accel_dualcast 00:08:24.116 ************************************ 00:08:24.116 06:57:38 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:08:24.116 06:57:38 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:08:24.116 06:57:38 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:08:24.116 06:57:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.116 06:57:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.116 06:57:38 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:24.116 06:57:38 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:24.116 06:57:38 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:08:24.116 06:57:38 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:24.116 06:57:38 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:24.116 06:57:38 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:24.116 06:57:38 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:24.116 06:57:38 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:24.116 06:57:38 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:08:24.116 06:57:38 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:08:24.116 [2024-07-24 06:57:38.599948] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
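[annotation] Every case in this section is driven through the same run_test wrapper that prints the START TEST / END TEST banners and leaves the real/user/sys summary seen above. The sketch below is a simplified stand-in for that wrapper, not the actual autotest_common.sh implementation; accel_test stands for the helper that ends up exec'ing accel_perf with the given flags.

    # simplified stand-in for the run_test banner/timing pattern visible in this log
    # (assumed shape, not the real autotest_common.sh code)
    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"
        echo "END TEST $name"
    }
    run_test accel_dualcast accel_test -t 1 -w dualcast -y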
00:08:24.116 [2024-07-24 06:57:38.600032] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1473340 ] 00:08:24.116 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.116 [2024-07-24 06:57:38.740972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.375 [2024-07-24 06:57:38.941738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.634 06:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:26.540 06:57:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:26.540 06:57:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:26.540 06:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:26.540 06:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:26.540 06:57:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:26.540 06:57:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:26.540 06:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:26.540 06:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:26.540 06:57:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:26.540 06:57:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:26.540 06:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:26.540 06:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:26.540 06:57:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:26.540 06:57:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:26.540 06:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:26.540 06:57:41 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:26.540 06:57:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:26.540 06:57:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:26.540 06:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:26.540 06:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:26.540 06:57:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:26.540 06:57:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:26.540 06:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:26.540 06:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:26.540 06:57:41 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:26.540 06:57:41 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:08:26.540 06:57:41 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:26.540 00:08:26.540 real 0m2.536s 00:08:26.540 user 0m2.306s 00:08:26.540 sys 0m0.232s 00:08:26.540 06:57:41 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:26.540 06:57:41 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:08:26.541 ************************************ 00:08:26.541 END TEST accel_dualcast 00:08:26.541 ************************************ 00:08:26.541 06:57:41 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:26.541 06:57:41 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:26.541 06:57:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.541 06:57:41 accel -- common/autotest_common.sh@10 -- # set +x 00:08:26.541 ************************************ 00:08:26.541 START TEST accel_compare 00:08:26.541 ************************************ 00:08:26.541 06:57:41 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:08:26.541 06:57:41 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:08:26.541 06:57:41 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:08:26.541 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:26.541 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:26.541 06:57:41 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:26.541 06:57:41 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:26.541 06:57:41 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:08:26.541 06:57:41 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:26.541 06:57:41 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:26.541 06:57:41 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:26.541 06:57:41 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:26.541 06:57:41 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:26.541 06:57:41 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:08:26.541 06:57:41 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:08:26.800 [2024-07-24 06:57:41.191174] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
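[annotation] Each finished case leaves a real/user/sys summary in the console output (0m2.583s, 0m2.518s, 0m2.536s so far). When comparing nightly runs it can be handy to pull just those wall-clock figures out of a saved copy of this output; the snippet below assumes the console has been saved to console.log, which is not something the job does by itself, and the exact line prefix depends on how the console was captured.

    # list the per-case wall-clock times from a saved copy of this console output
    # (console.log is an assumed file name, not produced by the job itself)
    grep -E 'real[[:space:]]+[0-9]+m[0-9.]+s' console.log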
00:08:26.800 [2024-07-24 06:57:41.191257] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1473656 ] 00:08:26.800 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.800 [2024-07-24 06:57:41.332376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.059 [2024-07-24 06:57:41.544894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:27.319 06:57:41 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:27.319 06:57:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.225 06:57:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:29.225 06:57:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.225 06:57:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.225 06:57:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.225 06:57:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:29.225 06:57:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.225 06:57:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.225 06:57:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.225 06:57:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:29.225 06:57:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.225 06:57:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.225 06:57:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.225 06:57:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:29.225 06:57:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.225 06:57:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.225 06:57:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.225 
06:57:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:29.225 06:57:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.225 06:57:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.225 06:57:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.225 06:57:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:29.225 06:57:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.225 06:57:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.225 06:57:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.225 06:57:43 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:29.225 06:57:43 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:08:29.225 06:57:43 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:29.225 00:08:29.225 real 0m2.538s 00:08:29.225 user 0m2.335s 00:08:29.225 sys 0m0.204s 00:08:29.225 06:57:43 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.225 06:57:43 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:08:29.225 ************************************ 00:08:29.225 END TEST accel_compare 00:08:29.225 ************************************ 00:08:29.225 06:57:43 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:29.225 06:57:43 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:29.225 06:57:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.225 06:57:43 accel -- common/autotest_common.sh@10 -- # set +x 00:08:29.225 ************************************ 00:08:29.225 START TEST accel_xor 00:08:29.225 ************************************ 00:08:29.225 06:57:43 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:08:29.225 06:57:43 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:29.225 06:57:43 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:29.225 06:57:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:29.225 06:57:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:29.225 06:57:43 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:29.225 06:57:43 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:29.225 06:57:43 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:29.225 06:57:43 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:29.225 06:57:43 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:29.225 06:57:43 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:29.225 06:57:43 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:29.225 06:57:43 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:29.225 06:57:43 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:29.225 06:57:43 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:29.225 [2024-07-24 06:57:43.795470] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
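[annotation] The accel_xor case launched here and the "-x 3" variant that follows differ only in how many source buffers accel_perf is asked to combine; that reading of -x is an inference from the val=2 and val=3 entries in the traces rather than from the tool's help output. A side-by-side sketch of the two invocations, flags taken verbatim from the run_test lines:

    # xor with the default source count, then with three sources as in the later case
    # (interpretation of -x as the source-buffer count is an assumption)
    ./build/examples/accel_perf -t 1 -w xor -y
    ./build/examples/accel_perf -t 1 -w xor -y -x 3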
00:08:29.225 [2024-07-24 06:57:43.795551] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1474188 ] 00:08:29.485 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.485 [2024-07-24 06:57:43.939549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.744 [2024-07-24 06:57:44.141675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:29.744 06:57:44 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:29.744 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.071 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.071 06:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:30.071 06:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.071 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.071 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.071 06:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:30.071 06:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.071 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.071 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.071 06:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:30.071 06:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.071 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.071 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.071 06:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:30.071 06:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.071 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.071 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.071 06:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:30.071 06:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.071 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.071 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.071 06:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:30.071 06:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.071 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.071 06:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:31.978 00:08:31.978 real 0m2.556s 00:08:31.978 user 0m0.008s 00:08:31.978 sys 0m0.003s 00:08:31.978 06:57:46 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.978 06:57:46 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:31.978 ************************************ 00:08:31.978 END TEST accel_xor 00:08:31.978 ************************************ 00:08:31.978 06:57:46 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:31.978 06:57:46 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:31.978 06:57:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.978 06:57:46 accel -- common/autotest_common.sh@10 -- # set +x 00:08:31.978 ************************************ 00:08:31.978 START TEST accel_xor 00:08:31.978 ************************************ 00:08:31.978 06:57:46 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:31.978 06:57:46 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:31.978 [2024-07-24 06:57:46.434617] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
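[annotation] The long runs of IFS=: / read -r var val / case "$var" lines in these traces are accel.sh stepping through the settings each case is expected to run with (opcode, module, sizes, duration) one var:val pair at a time. The loop below is a loose reconstruction of that pattern from the accel.sh line numbers in the trace, not the actual accel.sh source; the input file name is purely illustrative.

    # loose reconstruction of the accel.sh@19-23 pattern visible in the xtrace above
    while IFS=: read -r var val; do
        case "$var" in
            *opc*)    accel_opc=$val    ;;  # e.g. xor, copy_crc32c, dif_verify
            *module*) accel_module=$val ;;  # expected to end up as "software"
        esac
    done < expected_settings.txt            # assumed input file for illustration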
00:08:31.978 [2024-07-24 06:57:46.434702] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1474736 ] 00:08:31.978 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.978 [2024-07-24 06:57:46.574869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.238 [2024-07-24 06:57:46.765898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.497 06:57:46 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:32.497 06:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.498 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.498 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.498 06:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:32.498 06:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.498 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.498 06:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.404 06:57:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:34.404 06:57:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.404 06:57:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.404 06:57:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.404 06:57:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:34.404 06:57:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.404 06:57:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.404 06:57:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.404 06:57:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:34.404 06:57:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.404 06:57:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.404 06:57:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.404 06:57:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:34.404 06:57:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.404 06:57:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.404 06:57:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.404 06:57:48 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:08:34.405 06:57:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.405 06:57:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.405 06:57:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.405 06:57:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:34.405 06:57:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.405 06:57:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.405 06:57:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.405 06:57:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:34.405 06:57:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:34.405 06:57:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:34.405 00:08:34.405 real 0m2.531s 00:08:34.405 user 0m2.308s 00:08:34.405 sys 0m0.225s 00:08:34.405 06:57:48 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:34.405 06:57:48 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:34.405 ************************************ 00:08:34.405 END TEST accel_xor 00:08:34.405 ************************************ 00:08:34.405 06:57:48 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:34.405 06:57:48 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:34.405 06:57:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.405 06:57:48 accel -- common/autotest_common.sh@10 -- # set +x 00:08:34.405 ************************************ 00:08:34.405 START TEST accel_dif_verify 00:08:34.405 ************************************ 00:08:34.405 06:57:48 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:08:34.405 06:57:48 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:08:34.405 06:57:48 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:08:34.405 06:57:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:34.405 06:57:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:34.405 06:57:48 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:34.405 06:57:48 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:34.405 06:57:48 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:34.405 06:57:48 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:34.405 06:57:48 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:34.405 06:57:48 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:34.405 06:57:48 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:34.405 06:57:48 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:34.405 06:57:48 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:34.405 06:57:48 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:08:34.664 [2024-07-24 06:57:49.038023] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
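[annotation] The accel_dif_verify case starting here is launched the same way as the others, with only "-t 1 -w dif_verify" on the command line; the 4096-byte, 512-byte and 8-byte values in its trace are set by the script rather than passed as flags. A hand-run sketch, again omitting the harness's JSON config (an assumption, as before):

    # dif_verify as launched by the harness; flags verbatim from the run_test line,
    # with the -c /dev/fd/62 JSON config omitted (assumption: defaults are fine for a hand run)
    ./build/examples/accel_perf -t 1 -w dif_verify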
00:08:34.664 [2024-07-24 06:57:49.038124] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1475117 ] 00:08:34.664 EAL: No free 2048 kB hugepages reported on node 1 00:08:34.664 [2024-07-24 06:57:49.180638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.924 [2024-07-24 06:57:49.384297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.183 06:57:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:35.183 06:57:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:35.183 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:35.183 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:35.183 06:57:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:35.183 06:57:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:35.184 06:57:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:37.103 06:57:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:08:37.103 06:57:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:37.103 06:57:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:37.103 06:57:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:37.103 06:57:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:37.103 06:57:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:37.103 06:57:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:37.103 06:57:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:37.103 06:57:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:37.103 06:57:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:37.103 06:57:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:37.103 06:57:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:37.103 06:57:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:37.103 06:57:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:37.103 06:57:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:37.103 06:57:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:37.103 06:57:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:37.103 06:57:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:37.103 06:57:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:37.103 06:57:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:37.103 06:57:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:37.103 06:57:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:37.103 06:57:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:37.103 06:57:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:37.103 06:57:51 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:37.104 06:57:51 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:37.104 06:57:51 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:37.104 00:08:37.104 real 0m2.544s 00:08:37.104 user 0m2.315s 00:08:37.104 sys 0m0.232s 00:08:37.104 06:57:51 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:37.104 06:57:51 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:37.104 ************************************ 00:08:37.104 END TEST accel_dif_verify 00:08:37.104 ************************************ 00:08:37.104 06:57:51 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:37.104 06:57:51 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:37.104 06:57:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.104 06:57:51 accel -- common/autotest_common.sh@10 -- # set +x 00:08:37.104 ************************************ 00:08:37.104 START TEST accel_dif_generate 00:08:37.104 ************************************ 00:08:37.104 06:57:51 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:08:37.104 06:57:51 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:37.104 06:57:51 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:37.104 06:57:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.104 06:57:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 
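The accel_dif_verify case above closes out at roughly 2.5 s of wall time on the software module, and the harness immediately drives accel_dif_generate through the same run_test/accel_test wrapper. The trace that follows shows the underlying accel_perf command being launched with a JSON accel config passed on fd 62; as a hedged standalone sketch (the SPDK variable is just shorthand for the workspace path, and dropping -c is assumed to fall back to the built-in software module actually exercised here):

  # Rough manual equivalent of the dif_generate run traced below (sketch only).
  # The CI harness additionally passes -c /dev/fd/62 to supply its JSON accel config.
  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $SPDK/build/examples/accel_perf -t 1 -w dif_generate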
00:08:37.104 06:57:51 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:37.104 06:57:51 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:37.104 06:57:51 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:08:37.104 06:57:51 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:37.104 06:57:51 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:37.104 06:57:51 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:37.104 06:57:51 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:37.104 06:57:51 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:37.104 06:57:51 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:08:37.104 06:57:51 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:08:37.104 [2024-07-24 06:57:51.639792] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:08:37.104 [2024-07-24 06:57:51.639877] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1475577 ] 00:08:37.104 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.363 [2024-07-24 06:57:51.778665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.363 [2024-07-24 06:57:51.982249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.622 06:57:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:37.622 06:57:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.622 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.622 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.622 06:57:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:37.622 06:57:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.623 06:57:52 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:08:37.623 06:57:52 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.623 06:57:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.530 06:57:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:39.530 06:57:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.530 06:57:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.530 06:57:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.530 06:57:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:39.530 06:57:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.530 06:57:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.530 06:57:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.530 06:57:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:39.530 06:57:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.530 06:57:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.530 06:57:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.530 06:57:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:39.530 06:57:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.530 06:57:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.530 06:57:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.530 06:57:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:39.530 06:57:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.530 06:57:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.530 06:57:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.530 06:57:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:39.530 06:57:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.530 06:57:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.530 06:57:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.789 06:57:54 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:39.789 06:57:54 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:39.789 06:57:54 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:39.789 00:08:39.789 real 0m2.564s 
00:08:39.789 user 0m2.342s 00:08:39.789 sys 0m0.224s 00:08:39.789 06:57:54 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.789 06:57:54 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:08:39.789 ************************************ 00:08:39.789 END TEST accel_dif_generate 00:08:39.789 ************************************ 00:08:39.789 06:57:54 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:39.789 06:57:54 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:39.789 06:57:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.789 06:57:54 accel -- common/autotest_common.sh@10 -- # set +x 00:08:39.789 ************************************ 00:08:39.789 START TEST accel_dif_generate_copy 00:08:39.789 ************************************ 00:08:39.789 06:57:54 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:08:39.789 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:39.789 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:08:39.789 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:39.789 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:39.789 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:39.789 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:39.789 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:39.789 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:39.789 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:39.789 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:39.789 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:39.789 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:39.789 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:39.790 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:08:39.790 [2024-07-24 06:57:54.280668] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
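accel_dif_generate lands at about 2.56 s real time, again on the software module, and accel_dif_generate_copy starts next; it is launched identically and differs only in the workload name. A hedged sketch mirroring the command visible in this trace (SPDK as workspace shorthand, the harness's config fd omitted):

  # Sketch only: the dif_generate_copy variant of the same accel_perf run.
  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $SPDK/build/examples/accel_perf -t 1 -w dif_generate_copy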
00:08:39.790 [2024-07-24 06:57:54.280751] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1476129 ] 00:08:39.790 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.048 [2024-07-24 06:57:54.422190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.048 [2024-07-24 06:57:54.630289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.308 06:57:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.215 06:57:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:42.215 06:57:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:42.215 06:57:56 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:08:42.215 06:57:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.215 06:57:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:42.215 06:57:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:42.215 06:57:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:42.215 06:57:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.215 06:57:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:42.215 06:57:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:42.215 06:57:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:42.215 06:57:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.215 06:57:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:42.215 06:57:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:42.215 06:57:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:42.215 06:57:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.215 06:57:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:42.215 06:57:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:42.215 06:57:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:42.215 06:57:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.215 06:57:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:42.215 06:57:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:42.215 06:57:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:42.215 06:57:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.215 06:57:56 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:42.215 06:57:56 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:42.215 06:57:56 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:42.215 00:08:42.215 real 0m2.564s 00:08:42.215 user 0m0.010s 00:08:42.215 sys 0m0.002s 00:08:42.215 06:57:56 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:42.215 06:57:56 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:42.215 ************************************ 00:08:42.215 END TEST accel_dif_generate_copy 00:08:42.215 ************************************ 00:08:42.215 06:57:56 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:42.215 06:57:56 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:42.215 06:57:56 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:42.215 06:57:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.215 06:57:56 accel -- common/autotest_common.sh@10 -- # set +x 00:08:42.474 ************************************ 00:08:42.474 START TEST accel_comp 00:08:42.474 ************************************ 00:08:42.474 06:57:56 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:42.474 06:57:56 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:08:42.474 06:57:56 accel.accel_comp -- 
accel/accel.sh@17 -- # local accel_module 00:08:42.474 06:57:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:42.474 06:57:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:42.474 06:57:56 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:42.474 06:57:56 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:42.474 06:57:56 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:42.474 06:57:56 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:42.474 06:57:56 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:42.474 06:57:56 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:42.474 06:57:56 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:42.474 06:57:56 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:42.474 06:57:56 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:42.474 06:57:56 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:42.474 [2024-07-24 06:57:56.911066] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:08:42.474 [2024-07-24 06:57:56.911152] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1476670 ] 00:08:42.474 EAL: No free 2048 kB hugepages reported on node 1 00:08:42.474 [2024-07-24 06:57:57.054346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.733 [2024-07-24 06:57:57.253021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@21 -- # 
case "$var" in 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:42.992 06:57:57 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:42.993 06:57:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:42.993 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:42.993 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:42.993 06:57:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:42.993 
06:57:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:42.993 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:42.993 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:42.993 06:57:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:42.993 06:57:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:42.993 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:42.993 06:57:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:44.898 06:57:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:44.898 06:57:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:44.898 06:57:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:44.898 06:57:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:44.898 06:57:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:44.898 06:57:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:44.898 06:57:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:44.898 06:57:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:44.898 06:57:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:44.898 06:57:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:44.898 06:57:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:44.898 06:57:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:44.898 06:57:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:44.898 06:57:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:44.898 06:57:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:44.898 06:57:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:44.898 06:57:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:44.898 06:57:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:44.898 06:57:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:44.898 06:57:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:44.898 06:57:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:44.898 06:57:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:44.898 06:57:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:44.898 06:57:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:44.898 06:57:59 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:44.898 06:57:59 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:44.898 06:57:59 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:44.898 00:08:44.898 real 0m2.556s 00:08:44.898 user 0m2.307s 00:08:44.898 sys 0m0.250s 00:08:44.898 06:57:59 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:44.898 06:57:59 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:44.898 ************************************ 00:08:44.898 END TEST accel_comp 00:08:44.898 ************************************ 00:08:44.898 06:57:59 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:44.898 06:57:59 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:44.898 06:57:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.898 06:57:59 accel -- common/autotest_common.sh@10 -- # set +x 00:08:44.898 ************************************ 00:08:44.898 START TEST accel_decomp 00:08:44.898 ************************************ 
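accel_comp finishes in about 2.56 s against the software module, and accel_decomp immediately replays the same bib test file through the decompress opcode with result verification enabled (-y). Both command lines appear in the trace; a hedged standalone sketch of the pair (SPDK as workspace shorthand, the harness's -c /dev/fd/62 config omitted):

  # Sketch of the compress/decompress pair driven by accel.sh in this run.
  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $SPDK/build/examples/accel_perf -t 1 -w compress   -l $SPDK/test/accel/bib
  $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y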
00:08:44.898 06:57:59 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:44.898 06:57:59 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:44.898 06:57:59 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:44.898 06:57:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:44.898 06:57:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:44.898 06:57:59 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:44.898 06:57:59 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:44.898 06:57:59 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:44.898 06:57:59 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:44.898 06:57:59 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:44.898 06:57:59 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:44.898 06:57:59 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:44.898 06:57:59 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:44.898 06:57:59 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:44.898 06:57:59 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:45.158 [2024-07-24 06:57:59.534875] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:08:45.158 [2024-07-24 06:57:59.534975] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1477028 ] 00:08:45.158 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.158 [2024-07-24 06:57:59.677558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.417 [2024-07-24 06:57:59.876219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@20 -- # 
val= 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 
00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:45.677 06:58:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:45.678 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:45.678 06:58:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:47.647 06:58:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:47.647 06:58:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.647 06:58:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:47.647 06:58:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:47.647 06:58:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:47.647 06:58:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.647 06:58:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:47.647 06:58:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:47.647 06:58:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:47.647 06:58:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.647 06:58:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:47.647 06:58:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:47.647 06:58:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:47.647 06:58:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.647 06:58:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:47.647 06:58:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:47.647 06:58:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:47.647 06:58:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.647 06:58:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:47.647 06:58:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:47.647 06:58:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:47.647 06:58:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.647 06:58:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:47.647 06:58:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:47.647 06:58:02 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:47.647 06:58:02 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:47.647 06:58:02 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:47.647 00:08:47.647 real 0m2.561s 00:08:47.647 user 0m2.335s 00:08:47.647 sys 0m0.228s 00:08:47.647 06:58:02 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:47.647 06:58:02 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:47.647 ************************************ 00:08:47.647 END TEST accel_decomp 00:08:47.647 
************************************ 00:08:47.647 06:58:02 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:47.647 06:58:02 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:47.647 06:58:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:47.647 06:58:02 accel -- common/autotest_common.sh@10 -- # set +x 00:08:47.647 ************************************ 00:08:47.647 START TEST accel_decomp_full 00:08:47.647 ************************************ 00:08:47.647 06:58:02 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:47.647 06:58:02 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:47.647 06:58:02 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:47.647 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:47.647 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:47.647 06:58:02 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:47.647 06:58:02 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:47.647 06:58:02 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:47.647 06:58:02 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:47.647 06:58:02 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:47.647 06:58:02 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:47.647 06:58:02 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:47.647 06:58:02 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:47.647 06:58:02 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:47.647 06:58:02 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:47.647 [2024-07-24 06:58:02.160961] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
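accel_decomp_full repeats the decompress workload with -o 0 added; judging from the '111250 bytes' value the harness reads back below, that appears to push the whole bib file through as a single transfer instead of the 4096-byte chunks used by the earlier cases. A hedged equivalent (SPDK as workspace shorthand, config fd omitted):

  # Sketch: full-buffer decompress pass, per the -o 0 command in the trace above.
  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0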
00:08:47.647 [2024-07-24 06:58:02.161043] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1477516 ] 00:08:47.647 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.907 [2024-07-24 06:58:02.306712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.907 [2024-07-24 06:58:02.502955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:48.168 06:58:02 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:48.168 06:58:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:48.169 06:58:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:48.169 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:48.169 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:48.169 06:58:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:48.169 06:58:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:48.169 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:48.169 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:48.169 06:58:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:48.169 06:58:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:48.169 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:48.169 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:48.169 06:58:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:48.169 06:58:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:48.169 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:48.169 06:58:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:50.074 06:58:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:50.074 06:58:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:50.074 06:58:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:50.074 06:58:04 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:50.074 06:58:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:50.074 06:58:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:50.074 06:58:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:50.074 06:58:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:50.074 06:58:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:50.074 06:58:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:50.074 06:58:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:50.074 06:58:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:50.074 06:58:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:50.074 06:58:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:50.074 06:58:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:50.074 06:58:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:50.074 06:58:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:50.074 06:58:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:50.074 06:58:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:50.074 06:58:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:50.074 06:58:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:50.074 06:58:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:50.074 06:58:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:50.074 06:58:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:50.074 06:58:04 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:50.074 06:58:04 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:50.074 06:58:04 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:50.074 00:08:50.074 real 0m2.578s 00:08:50.074 user 0m0.008s 00:08:50.074 sys 0m0.004s 00:08:50.074 06:58:04 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:50.074 06:58:04 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:50.074 ************************************ 00:08:50.074 END TEST accel_decomp_full 00:08:50.074 ************************************ 00:08:50.333 06:58:04 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:50.333 06:58:04 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:50.333 06:58:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.333 06:58:04 accel -- common/autotest_common.sh@10 -- # set +x 00:08:50.333 ************************************ 00:08:50.333 START TEST accel_decomp_mcore 00:08:50.333 ************************************ 00:08:50.333 06:58:04 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:50.333 06:58:04 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:50.333 06:58:04 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:50.333 06:58:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:50.333 06:58:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:50.333 
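A note on the run_test line just above: accel_decomp_mcore repeats the software decompress of test/accel/bib with -m 0xf, so accel_perf brings up one reactor on each of cores 0-3, as the reactor_run notices that follow show. A minimal sketch of an equivalent manual invocation, assuming a built SPDK tree at the workspace path in the log; dropping the -c /dev/fd/62 JSON config that the harness normally supplies is an assumption made here because no accel module is configured in this run:

  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  # 1-second (-t 1) decompress of the bib test file on cores 0-3 (-m 0xf), flags as logged
  ./build/examples/accel_perf -t 1 -w decompress -y -m 0xf -l test/accel/bib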
06:58:04 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:50.333 06:58:04 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:50.333 06:58:04 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:50.333 06:58:04 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:50.333 06:58:04 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:50.333 06:58:04 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:50.333 06:58:04 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:50.333 06:58:04 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:50.333 06:58:04 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:50.333 06:58:04 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:50.333 [2024-07-24 06:58:04.809826] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:08:50.333 [2024-07-24 06:58:04.809909] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1478063 ] 00:08:50.333 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.333 [2024-07-24 06:58:04.952042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:50.592 [2024-07-24 06:58:05.158290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.592 [2024-07-24 06:58:05.158365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.592 [2024-07-24 06:58:05.158423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.592 [2024-07-24 06:58:05.158448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:50.852 
06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:50.852 06:58:05 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:50.852 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:50.853 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:50.853 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:50.853 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:50.853 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:50.853 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:50.853 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:50.853 06:58:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:53.387 00:08:53.387 real 0m2.649s 00:08:53.387 user 0m7.923s 00:08:53.387 sys 0m0.257s 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:53.387 06:58:07 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:53.387 ************************************ 00:08:53.387 END TEST accel_decomp_mcore 00:08:53.387 ************************************ 00:08:53.387 06:58:07 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:53.387 06:58:07 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:53.387 06:58:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.387 06:58:07 accel -- common/autotest_common.sh@10 -- # set +x 00:08:53.387 ************************************ 00:08:53.387 START TEST accel_decomp_full_mcore 00:08:53.387 ************************************ 00:08:53.387 06:58:07 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:53.387 06:58:07 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:53.387 06:58:07 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:53.387 06:58:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:53.387 06:58:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:53.387 06:58:07 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:53.387 06:58:07 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:53.387 06:58:07 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:53.387 06:58:07 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:53.387 06:58:07 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:53.387 06:58:07 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:53.387 06:58:07 accel.accel_decomp_full_mcore -- accel/accel.sh@34 
-- # [[ 0 -gt 0 ]] 00:08:53.387 06:58:07 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:53.387 06:58:07 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:53.387 06:58:07 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:53.387 [2024-07-24 06:58:07.542464] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:08:53.387 [2024-07-24 06:58:07.542564] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1478524 ] 00:08:53.387 EAL: No free 2048 kB hugepages reported on node 1 00:08:53.387 [2024-07-24 06:58:07.685849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:53.387 [2024-07-24 06:58:07.891368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.387 [2024-07-24 06:58:07.891441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:53.387 [2024-07-24 06:58:07.891514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.387 [2024-07-24 06:58:07.891521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:53.646 06:58:08 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" 
in 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:53.646 06:58:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.551 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:55.551 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.551 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.551 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.551 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:55.551 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.551 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.551 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.551 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:55.551 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.551 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.551 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.551 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:55.551 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.551 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.551 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.551 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:55.551 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.551 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.551 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.552 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:55.552 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.552 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.552 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.552 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:55.552 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.552 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.552 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.552 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:55.552 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:08:55.552 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.552 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.552 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:55.552 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.552 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.552 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.552 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:55.552 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:55.552 06:58:10 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:55.552 00:08:55.552 real 0m2.666s 00:08:55.552 user 0m7.964s 00:08:55.552 sys 0m0.246s 00:08:55.552 06:58:10 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:55.552 06:58:10 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:55.552 ************************************ 00:08:55.552 END TEST accel_decomp_full_mcore 00:08:55.552 ************************************ 00:08:55.873 06:58:10 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:55.873 06:58:10 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:55.873 06:58:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:55.873 06:58:10 accel -- common/autotest_common.sh@10 -- # set +x 00:08:55.873 ************************************ 00:08:55.873 START TEST accel_decomp_mthread 00:08:55.873 ************************************ 00:08:55.873 06:58:10 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:55.873 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:55.873 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:55.873 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:55.873 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:55.873 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:55.873 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:55.873 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:55.873 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:55.873 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:55.873 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:55.873 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:55.873 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:55.873 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:55.873 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
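Each of these decompress tests ends with the same three checks traced at accel/accel.sh@27, which only confirm that the run completed on the expected path. A paraphrase of that assertion pattern, with accel_module and accel_opc set to the values the trace shows being parsed back out of the run:

  accel_module=software
  accel_opc=decompress
  [[ -n "$accel_module" ]]            # a module was selected
  [[ -n "$accel_opc" ]]               # an opcode was exercised
  [[ "$accel_module" == software ]]   # and it really was the software path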
00:08:55.873 [2024-07-24 06:58:10.289468] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:08:55.874 [2024-07-24 06:58:10.289553] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1478918 ] 00:08:55.874 EAL: No free 2048 kB hugepages reported on node 1 00:08:55.874 [2024-07-24 06:58:10.437196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.161 [2024-07-24 06:58:10.651614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 
00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:56.421 06:58:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:56.421 06:58:10 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:58.329 00:08:58.329 real 0m2.616s 00:08:58.329 user 0m2.387s 00:08:58.329 sys 0m0.248s 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.329 06:58:12 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:58.329 ************************************ 00:08:58.329 END TEST accel_decomp_mthread 00:08:58.329 ************************************ 00:08:58.329 06:58:12 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:58.329 06:58:12 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:58.329 06:58:12 accel -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:08:58.329 06:58:12 accel -- common/autotest_common.sh@10 -- # set +x 00:08:58.329 ************************************ 00:08:58.329 START TEST accel_decomp_full_mthread 00:08:58.329 ************************************ 00:08:58.329 06:58:12 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:58.329 06:58:12 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:58.329 06:58:12 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:58.329 06:58:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:58.329 06:58:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:58.329 06:58:12 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:58.329 06:58:12 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:58.329 06:58:12 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:58.329 06:58:12 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:58.329 06:58:12 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:58.329 06:58:12 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:58.329 06:58:12 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:58.329 06:58:12 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:58.329 06:58:12 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:58.329 06:58:12 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:58.589 [2024-07-24 06:58:12.985126] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
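For orientation, the decompress variants traced back to back in this stretch all call the same accel_test helper with one or two flags changed; the run_test lines as they appear in the log are summarized below. The pairing of -o 0 with the '111250 bytes' values (versus '4096 bytes' without it) is an observation from the traced val assignments, not a statement of the flag's documented meaning, and accel_test itself is a harness function, so these lines run inside test/accel/accel.sh rather than standalone:

  BIB=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib
  accel_test -t 1 -w decompress -l $BIB -y -m 0xf         # accel_decomp_mcore
  accel_test -t 1 -w decompress -l $BIB -y -o 0 -m 0xf    # accel_decomp_full_mcore
  accel_test -t 1 -w decompress -l $BIB -y -T 2           # accel_decomp_mthread
  accel_test -t 1 -w decompress -l $BIB -y -o 0 -T 2      # accel_decomp_full_mthread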
00:08:58.589 [2024-07-24 06:58:12.985212] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1479469 ] 00:08:58.589 EAL: No free 2048 kB hugepages reported on node 1 00:08:58.589 [2024-07-24 06:58:13.127260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.849 [2024-07-24 06:58:13.326914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.109 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:59.109 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:59.109 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:59.109 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:59.109 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:59.109 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:59.109 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:59.109 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:59.109 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:59.109 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:59.109 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:59.109 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:59.110 06:58:13 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:59.110 06:58:13 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:59.110 06:58:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.016 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:01.017 00:09:01.017 real 0m2.624s 00:09:01.017 user 0m2.414s 00:09:01.017 sys 0m0.227s 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.017 06:58:15 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:09:01.017 ************************************ 00:09:01.017 END 
TEST accel_decomp_full_mthread 00:09:01.017 ************************************ 00:09:01.017 06:58:15 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:09:01.017 06:58:15 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:01.017 06:58:15 accel -- accel/accel.sh@137 -- # build_accel_config 00:09:01.017 06:58:15 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:01.017 06:58:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.017 06:58:15 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:01.017 06:58:15 accel -- common/autotest_common.sh@10 -- # set +x 00:09:01.017 06:58:15 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:01.017 06:58:15 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:01.017 06:58:15 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:01.017 06:58:15 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:01.017 06:58:15 accel -- accel/accel.sh@40 -- # local IFS=, 00:09:01.017 06:58:15 accel -- accel/accel.sh@41 -- # jq -r . 00:09:01.017 ************************************ 00:09:01.017 START TEST accel_dif_functional_tests 00:09:01.017 ************************************ 00:09:01.017 06:58:15 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:01.276 [2024-07-24 06:58:15.720211] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:09:01.276 [2024-07-24 06:58:15.720288] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1480017 ] 00:09:01.276 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.276 [2024-07-24 06:58:15.861254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:01.534 [2024-07-24 06:58:16.058051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.534 [2024-07-24 06:58:16.058116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.534 [2024-07-24 06:58:16.058124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.792 00:09:01.792 00:09:01.792 CUnit - A unit testing framework for C - Version 2.1-3 00:09:01.792 http://cunit.sourceforge.net/ 00:09:01.792 00:09:01.792 00:09:01.792 Suite: accel_dif 00:09:01.792 Test: verify: DIF generated, GUARD check ...passed 00:09:01.792 Test: verify: DIF generated, APPTAG check ...passed 00:09:01.792 Test: verify: DIF generated, REFTAG check ...passed 00:09:01.792 Test: verify: DIF not generated, GUARD check ...[2024-07-24 06:58:16.401333] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:01.792 passed 00:09:01.792 Test: verify: DIF not generated, APPTAG check ...[2024-07-24 06:58:16.401422] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:01.792 passed 00:09:01.792 Test: verify: DIF not generated, REFTAG check ...[2024-07-24 06:58:16.401462] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:01.792 passed 00:09:01.792 Test: verify: APPTAG correct, APPTAG check ...passed 00:09:01.792 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-24 06:58:16.401544] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, 
Expected=28, Actual=14 00:09:01.792 passed 00:09:01.792 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:09:01.792 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:09:01.792 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:09:01.792 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-24 06:58:16.401708] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:09:01.792 passed 00:09:01.792 Test: verify copy: DIF generated, GUARD check ...passed 00:09:01.792 Test: verify copy: DIF generated, APPTAG check ...passed 00:09:01.792 Test: verify copy: DIF generated, REFTAG check ...passed 00:09:01.792 Test: verify copy: DIF not generated, GUARD check ...[2024-07-24 06:58:16.401898] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:01.792 passed 00:09:01.792 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-24 06:58:16.401950] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:01.792 passed 00:09:01.792 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-24 06:58:16.401998] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:01.792 passed 00:09:01.792 Test: generate copy: DIF generated, GUARD check ...passed 00:09:01.792 Test: generate copy: DIF generated, APTTAG check ...passed 00:09:01.792 Test: generate copy: DIF generated, REFTAG check ...passed 00:09:01.792 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:09:01.793 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:09:01.793 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:09:01.793 Test: generate copy: iovecs-len validate ...[2024-07-24 06:58:16.402308] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:09:01.793 passed 00:09:01.793 Test: generate copy: buffer alignment validate ...passed 00:09:01.793 00:09:01.793 Run Summary: Type Total Ran Passed Failed Inactive 00:09:01.793 suites 1 1 n/a 0 0 00:09:01.793 tests 26 26 26 0 0 00:09:01.793 asserts 115 115 115 0 n/a 00:09:01.793 00:09:01.793 Elapsed time = 0.003 seconds 00:09:03.171 00:09:03.171 real 0m1.986s 00:09:03.171 user 0m4.047s 00:09:03.171 sys 0m0.282s 00:09:03.171 06:58:17 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:03.171 06:58:17 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:09:03.171 ************************************ 00:09:03.171 END TEST accel_dif_functional_tests 00:09:03.171 ************************************ 00:09:03.171 00:09:03.171 real 1m2.470s 00:09:03.171 user 1m9.293s 00:09:03.171 sys 0m7.393s 00:09:03.171 06:58:17 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:03.171 06:58:17 accel -- common/autotest_common.sh@10 -- # set +x 00:09:03.171 ************************************ 00:09:03.171 END TEST accel 00:09:03.171 ************************************ 00:09:03.171 06:58:17 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:09:03.171 06:58:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:03.171 06:58:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.171 06:58:17 -- common/autotest_common.sh@10 -- # set +x 00:09:03.171 ************************************ 00:09:03.171 START TEST accel_rpc 00:09:03.171 ************************************ 00:09:03.171 06:58:17 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:09:03.430 * Looking for test storage... 00:09:03.430 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:09:03.430 06:58:17 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:03.430 06:58:17 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1480355 00:09:03.430 06:58:17 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1480355 00:09:03.430 06:58:17 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:09:03.430 06:58:17 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1480355 ']' 00:09:03.430 06:58:17 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.430 06:58:17 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:03.430 06:58:17 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.430 06:58:17 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:03.430 06:58:17 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.430 [2024-07-24 06:58:17.960971] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
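For context on the accel_dif functional test above: run_test hands the dif binary its accel configuration on /dev/fd/62, i.e. through a process-substitution descriptor filled by build_accel_config rather than a file on disk. A minimal sketch of that shell pattern, with an empty JSON object standing in as a placeholder for the real build_accel_config output:

    # pass a JSON config to the test binary without writing a temp file;
    # '{}' is only a placeholder for the generated accel config
    ./test/accel/dif/dif -c <(echo '{}')

The binary then runs the CUnit accel_dif suite seen above, where the "not generated" cases are expected to log guard/app-tag/ref-tag compare errors and still count as passed.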
00:09:03.430 [2024-07-24 06:58:17.961090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1480355 ] 00:09:03.430 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.689 [2024-07-24 06:58:18.107997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.689 [2024-07-24 06:58:18.308282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.257 06:58:18 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:04.257 06:58:18 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:04.257 06:58:18 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:09:04.257 06:58:18 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:09:04.257 06:58:18 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:09:04.257 06:58:18 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:09:04.257 06:58:18 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:09:04.257 06:58:18 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:04.257 06:58:18 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.257 06:58:18 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.257 ************************************ 00:09:04.257 START TEST accel_assign_opcode 00:09:04.257 ************************************ 00:09:04.257 06:58:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:09:04.257 06:58:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:09:04.257 06:58:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.257 06:58:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:04.257 [2024-07-24 06:58:18.746078] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:09:04.257 06:58:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.257 06:58:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:09:04.257 06:58:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.257 06:58:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:04.257 [2024-07-24 06:58:18.754072] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:09:04.257 06:58:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.257 06:58:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:09:04.257 06:58:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.257 06:58:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:05.194 06:58:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.194 06:58:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:09:05.194 06:58:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:09:05.194 06:58:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:09:05.194 06:58:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:05.194 06:58:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:09:05.194 06:58:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.194 software 00:09:05.194 00:09:05.194 real 0m0.884s 00:09:05.194 user 0m0.045s 00:09:05.194 sys 0m0.014s 00:09:05.194 06:58:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:05.194 06:58:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:05.194 ************************************ 00:09:05.194 END TEST accel_assign_opcode 00:09:05.194 ************************************ 00:09:05.194 06:58:19 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1480355 00:09:05.194 06:58:19 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1480355 ']' 00:09:05.194 06:58:19 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1480355 00:09:05.194 06:58:19 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:09:05.194 06:58:19 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:05.194 06:58:19 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1480355 00:09:05.194 06:58:19 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:05.194 06:58:19 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:05.194 06:58:19 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1480355' 00:09:05.194 killing process with pid 1480355 00:09:05.194 06:58:19 accel_rpc -- common/autotest_common.sh@967 -- # kill 1480355 00:09:05.194 06:58:19 accel_rpc -- common/autotest_common.sh@972 -- # wait 1480355 00:09:07.731 00:09:07.731 real 0m4.284s 00:09:07.731 user 0m4.161s 00:09:07.731 sys 0m0.626s 00:09:07.731 06:58:22 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:07.731 06:58:22 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.731 ************************************ 00:09:07.731 END TEST accel_rpc 00:09:07.731 ************************************ 00:09:07.731 06:58:22 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:09:07.731 06:58:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:07.731 06:58:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.731 06:58:22 -- common/autotest_common.sh@10 -- # set +x 00:09:07.731 ************************************ 00:09:07.731 START TEST app_cmdline 00:09:07.731 ************************************ 00:09:07.731 06:58:22 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:09:07.731 * Looking for test storage... 
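The accel_rpc assign-opcode test above boils down to a short RPC sequence issued while spdk_tgt is still parked in --wait-for-rpc mode (the trace first binds the copy opcode to a deliberately bogus module, then overrides it with software before letting the framework initialize). A minimal sketch of the final sequence, assuming the default /var/tmp/spdk.sock socket:

    # bind the copy opcode to the software module, then finish target init
    scripts/rpc.py accel_assign_opc -o copy -m software
    scripts/rpc.py framework_start_init
    # confirm the assignment took effect
    scripts/rpc.py accel_get_opc_assignments | jq -r .copy    # expected output: software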
00:09:07.731 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:09:07.731 06:58:22 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:07.731 06:58:22 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1481221 00:09:07.731 06:58:22 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:07.731 06:58:22 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1481221 00:09:07.731 06:58:22 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1481221 ']' 00:09:07.731 06:58:22 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.731 06:58:22 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:07.731 06:58:22 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.731 06:58:22 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:07.731 06:58:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:07.731 [2024-07-24 06:58:22.330574] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:09:07.731 [2024-07-24 06:58:22.330678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1481221 ] 00:09:07.990 EAL: No free 2048 kB hugepages reported on node 1 00:09:07.990 [2024-07-24 06:58:22.476581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.249 [2024-07-24 06:58:22.679709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.185 06:58:23 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:09.185 06:58:23 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:09:09.185 06:58:23 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:09:09.185 { 00:09:09.185 "version": "SPDK v24.09-pre git sha1 78cbcfdde", 00:09:09.185 "fields": { 00:09:09.185 "major": 24, 00:09:09.185 "minor": 9, 00:09:09.185 "patch": 0, 00:09:09.185 "suffix": "-pre", 00:09:09.185 "commit": "78cbcfdde" 00:09:09.185 } 00:09:09.185 } 00:09:09.185 06:58:23 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:09.185 06:58:23 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:09.185 06:58:23 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:09.185 06:58:23 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:09.185 06:58:23 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:09.185 06:58:23 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:09.185 06:58:23 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:09.185 06:58:23 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.185 06:58:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:09.185 06:58:23 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.185 06:58:23 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:09.185 06:58:23 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:09.185 06:58:23 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:09.185 06:58:23 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:09:09.185 06:58:23 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:09.185 06:58:23 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:09.185 06:58:23 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:09.185 06:58:23 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:09.185 06:58:23 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:09.185 06:58:23 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:09.185 06:58:23 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:09.185 06:58:23 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:09.185 06:58:23 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:09:09.185 06:58:23 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:09.444 request: 00:09:09.444 { 00:09:09.444 "method": "env_dpdk_get_mem_stats", 00:09:09.444 "req_id": 1 00:09:09.444 } 00:09:09.444 Got JSON-RPC error response 00:09:09.444 response: 00:09:09.444 { 00:09:09.444 "code": -32601, 00:09:09.444 "message": "Method not found" 00:09:09.444 } 00:09:09.444 06:58:23 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:09:09.444 06:58:23 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:09.444 06:58:23 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:09.444 06:58:23 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:09.444 06:58:23 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1481221 00:09:09.444 06:58:23 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1481221 ']' 00:09:09.444 06:58:23 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1481221 00:09:09.444 06:58:23 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:09:09.444 06:58:23 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:09.444 06:58:23 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1481221 00:09:09.444 06:58:23 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:09.444 06:58:23 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:09.444 06:58:23 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1481221' 00:09:09.444 killing process with pid 1481221 00:09:09.444 06:58:23 app_cmdline -- common/autotest_common.sh@967 -- # kill 1481221 00:09:09.444 06:58:23 app_cmdline -- common/autotest_common.sh@972 -- # wait 1481221 00:09:11.977 00:09:11.977 real 0m4.172s 00:09:11.977 user 0m4.274s 00:09:11.977 sys 0m0.650s 00:09:11.977 06:58:26 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:11.977 06:58:26 app_cmdline -- 
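What the app_cmdline test above is really exercising is the target's --rpcs-allowed whitelist: only the two listed methods are reachable, and any other call is rejected with JSON-RPC error -32601 (Method not found). A minimal sketch of the same check from an SPDK checkout (waiting for the socket to come up is elided):

    # start the target with a two-method whitelist, as in the trace above
    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py spdk_get_version            # allowed: returns the version object
    scripts/rpc.py env_dpdk_get_mem_stats      # filtered: fails with 'Method not found'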
common/autotest_common.sh@10 -- # set +x 00:09:11.977 ************************************ 00:09:11.977 END TEST app_cmdline 00:09:11.977 ************************************ 00:09:11.977 06:58:26 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:09:11.977 06:58:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:11.977 06:58:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.977 06:58:26 -- common/autotest_common.sh@10 -- # set +x 00:09:11.977 ************************************ 00:09:11.977 START TEST version 00:09:11.977 ************************************ 00:09:11.977 06:58:26 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:09:11.977 * Looking for test storage... 00:09:11.977 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:09:11.977 06:58:26 version -- app/version.sh@17 -- # get_header_version major 00:09:11.977 06:58:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:09:11.977 06:58:26 version -- app/version.sh@14 -- # cut -f2 00:09:11.977 06:58:26 version -- app/version.sh@14 -- # tr -d '"' 00:09:11.977 06:58:26 version -- app/version.sh@17 -- # major=24 00:09:11.977 06:58:26 version -- app/version.sh@18 -- # get_header_version minor 00:09:11.977 06:58:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:09:11.977 06:58:26 version -- app/version.sh@14 -- # cut -f2 00:09:11.977 06:58:26 version -- app/version.sh@14 -- # tr -d '"' 00:09:11.977 06:58:26 version -- app/version.sh@18 -- # minor=9 00:09:11.977 06:58:26 version -- app/version.sh@19 -- # get_header_version patch 00:09:11.977 06:58:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:09:11.977 06:58:26 version -- app/version.sh@14 -- # cut -f2 00:09:11.977 06:58:26 version -- app/version.sh@14 -- # tr -d '"' 00:09:11.977 06:58:26 version -- app/version.sh@19 -- # patch=0 00:09:11.977 06:58:26 version -- app/version.sh@20 -- # get_header_version suffix 00:09:11.977 06:58:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:09:11.977 06:58:26 version -- app/version.sh@14 -- # cut -f2 00:09:11.977 06:58:26 version -- app/version.sh@14 -- # tr -d '"' 00:09:11.977 06:58:26 version -- app/version.sh@20 -- # suffix=-pre 00:09:11.977 06:58:26 version -- app/version.sh@22 -- # version=24.9 00:09:11.977 06:58:26 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:11.977 06:58:26 version -- app/version.sh@28 -- # version=24.9rc0 00:09:11.977 06:58:26 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:09:11.978 06:58:26 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:11.978 06:58:26 version -- app/version.sh@30 -- # py_version=24.9rc0 00:09:11.978 06:58:26 version -- app/version.sh@31 -- # [[ 24.9rc0 == 
\2\4\.\9\r\c\0 ]] 00:09:11.978 00:09:11.978 real 0m0.186s 00:09:11.978 user 0m0.089s 00:09:11.978 sys 0m0.146s 00:09:11.978 06:58:26 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:11.978 06:58:26 version -- common/autotest_common.sh@10 -- # set +x 00:09:11.978 ************************************ 00:09:11.978 END TEST version 00:09:11.978 ************************************ 00:09:11.978 06:58:26 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:09:11.978 06:58:26 -- spdk/autotest.sh@198 -- # uname -s 00:09:11.978 06:58:26 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:09:11.978 06:58:26 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:09:11.978 06:58:26 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:09:11.978 06:58:26 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:09:11.978 06:58:26 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:09:11.978 06:58:26 -- spdk/autotest.sh@260 -- # timing_exit lib 00:09:11.978 06:58:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:11.978 06:58:26 -- common/autotest_common.sh@10 -- # set +x 00:09:12.236 06:58:26 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:09:12.236 06:58:26 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:09:12.236 06:58:26 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:09:12.236 06:58:26 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:09:12.236 06:58:26 -- spdk/autotest.sh@283 -- # '[' rdma = rdma ']' 00:09:12.236 06:58:26 -- spdk/autotest.sh@284 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:09:12.236 06:58:26 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:12.236 06:58:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.236 06:58:26 -- common/autotest_common.sh@10 -- # set +x 00:09:12.236 ************************************ 00:09:12.236 START TEST nvmf_rdma 00:09:12.236 ************************************ 00:09:12.236 06:58:26 nvmf_rdma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:09:12.236 * Looking for test storage... 00:09:12.236 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:09:12.236 06:58:26 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:09:12.236 06:58:26 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:09:12.236 06:58:26 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:09:12.236 06:58:26 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:12.236 06:58:26 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.236 06:58:26 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:12.236 ************************************ 00:09:12.236 START TEST nvmf_target_core 00:09:12.236 ************************************ 00:09:12.236 06:58:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:09:12.495 * Looking for test storage... 00:09:12.495 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:09:12.495 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:12.495 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
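The version test above reconstructs the release string directly from include/spdk/version.h and cross-checks it against the Python package. A minimal sketch of that extraction, run from the repository root (cut's default tab delimiter is what splits each #define from its value here, as the trace shows):

    # pull the numeric components out of the version header
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
    echo "${major}.${minor}"                                              # 24.9 for this build
    PYTHONPATH=python python3 -c 'import spdk; print(spdk.__version__)'   # 24.9rc0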
Linux = Linux ']' 00:09:12.495 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:12.495 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:12.495 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.495 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.495 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.495 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.495 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.495 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.495 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.495 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.495 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.495 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.495 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:12.495 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:12.495 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.495 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.495 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:12.496 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.496 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:12.496 06:58:26 nvmf_rdma.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.496 06:58:26 nvmf_rdma.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.496 06:58:26 nvmf_rdma.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.496 06:58:26 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.496 06:58:26 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.496 06:58:26 nvmf_rdma.nvmf_target_core 
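One detail worth calling out in the nvmf common.sh setup above: the host NQN / host ID pair is not hard-coded, it is produced by nvme-cli at source time. A minimal sketch (the output shown is the value this test bed reported in the trace):

    # print an NVMe-oF host NQN in the uuid: format
    nvme gen-hostnqn     # -> nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e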
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.496 06:58:26 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:12.496 06:58:26 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.496 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:09:12.496 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:12.496 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:12.496 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.496 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.496 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.496 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:12.496 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:12.496 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:12.496 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:12.496 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:12.496 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:09:12.496 06:58:26 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:09:12.496 06:58:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:12.496 06:58:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.496 06:58:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:12.496 ************************************ 00:09:12.496 START TEST nvmf_abort 00:09:12.496 ************************************ 00:09:12.496 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:09:12.496 * Looking for test storage... 
00:09:12.496 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:12.496 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:12.755 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:12.755 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.755 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.755 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.755 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.755 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.755 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.755 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.755 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.755 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.755 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.755 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:12.755 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:12.755 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.755 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.755 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:12.755 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.755 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:12.755 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.755 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.755 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.755 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.755 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.755 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.755 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:12.755 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.755 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:12.756 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:12.756 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:12.756 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.756 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.756 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.756 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:12.756 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:12.756 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:12.756 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:12.756 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:12.756 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:12.756 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:12.756 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:09:12.756 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:12.756 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:12.756 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:12.756 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.756 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.756 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.756 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:12.756 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:12.756 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:12.756 06:58:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:20.880 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:20.881 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:20.881 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:20.881 06:58:35 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:20.881 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:20.881 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # rdma_device_init 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # uname 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:20.881 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:21.141 06:58:35 
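The NIC probing above is plain sysfs traversal: for every supported Mellanox PCI function (device id 0x1015 in this run) the helper globs /sys/bus/pci/devices/$pci/net/ to find the backing netdev, and only then loads the IB/RDMA kernel modules. A minimal sketch using the addresses reported in this run:

    # list the kernel net devices behind each RDMA-capable PCI function
    ls /sys/bus/pci/devices/0000:d9:00.0/net/    # -> mlx_0_0 on this test bed
    ls /sys/bus/pci/devices/0000:d9:00.1/net/    # -> mlx_0_1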
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:21.141 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:21.141 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:21.141 altname enp217s0f0np0 00:09:21.141 altname ens818f0np0 00:09:21.141 inet 192.168.100.8/24 scope global mlx_0_0 00:09:21.141 valid_lft forever preferred_lft forever 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@73 -- # for 
nic_name in $(get_rdma_if_list) 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:21.141 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:21.141 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:21.141 altname enp217s0f1np1 00:09:21.141 altname ens818f1np1 00:09:21.141 inet 192.168.100.9/24 scope global mlx_0_1 00:09:21.141 valid_lft forever preferred_lft forever 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:21.141 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:21.142 06:58:35 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:21.142 192.168.100.9' 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:21.142 192.168.100.9' 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@457 -- # head -n 1 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:21.142 192.168.100.9' 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@458 -- # tail -n +2 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@458 -- # head -n 1 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 
-- # set +x 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1486337 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1486337 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1486337 ']' 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:21.142 06:58:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:21.401 [2024-07-24 06:58:35.809687] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:09:21.401 [2024-07-24 06:58:35.809779] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.401 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.401 [2024-07-24 06:58:35.959159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:21.661 [2024-07-24 06:58:36.167935] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:21.661 [2024-07-24 06:58:36.167980] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:21.661 [2024-07-24 06:58:36.167997] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:21.661 [2024-07-24 06:58:36.168008] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:21.661 [2024-07-24 06:58:36.168019] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
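For readers following the trace, the get_ip_address calls above reduce to a single pipeline; a minimal sketch of that idiom, using the interface names and addresses seen in this run (illustrative only, not part of the captured output):

# Sketch of the get_ip_address idiom traced above (nvmf/common.sh):
# print the first IPv4 address assigned to an interface.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # 192.168.100.8 in this run
get_ip_address mlx_0_1   # 192.168.100.9 in this run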
00:09:21.661 [2024-07-24 06:58:36.168150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:21.661 [2024-07-24 06:58:36.168228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.661 [2024-07-24 06:58:36.168239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:22.230 06:58:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:22.230 06:58:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:09:22.230 06:58:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:22.230 06:58:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:22.230 06:58:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:22.230 06:58:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:22.230 06:58:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:09:22.230 06:58:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.230 06:58:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:22.230 [2024-07-24 06:58:36.684655] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7fda6df86940) succeed. 00:09:22.230 [2024-07-24 06:58:36.700825] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7fda6df41940) succeed. 00:09:22.489 06:58:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.489 06:58:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:22.489 06:58:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.489 06:58:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:22.489 Malloc0 00:09:22.489 06:58:37 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.489 06:58:37 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:22.489 06:58:37 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.489 06:58:37 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:22.489 Delay0 00:09:22.489 06:58:37 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.489 06:58:37 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:22.489 06:58:37 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.489 06:58:37 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:22.489 06:58:37 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.489 06:58:37 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:22.489 06:58:37 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.489 06:58:37 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:22.489 06:58:37 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.489 06:58:37 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:09:22.489 06:58:37 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.489 06:58:37 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:22.489 [2024-07-24 06:58:37.070638] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:22.489 06:58:37 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.489 06:58:37 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:22.489 06:58:37 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.489 06:58:37 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:22.489 06:58:37 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.489 06:58:37 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:22.748 EAL: No free 2048 kB hugepages reported on node 1 00:09:22.748 [2024-07-24 06:58:37.216885] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:25.322 Initializing NVMe Controllers 00:09:25.322 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:09:25.322 controller IO queue size 128 less than required 00:09:25.322 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:25.322 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:25.322 Initialization complete. Launching workers. 
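Condensed for readability, the target-side bring-up recorded by the xtrace lines above amounts to the following rpc.py sequence (paths shortened relative to the SPDK tree; this is a sketch of what was traced, not part of the captured output):

# Target-side setup for the abort test, condensed from the rpc_cmd trace above.
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
# Initiator side: the abort example, single core, queue depth 128, 1 second run.
build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128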
00:09:25.322 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 44999 00:09:25.322 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 45060, failed to submit 62 00:09:25.322 success 45003, unsuccess 57, failed 0 00:09:25.322 06:58:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:25.322 06:58:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.322 06:58:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:25.322 06:58:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.322 06:58:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:25.323 06:58:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:25.323 06:58:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:25.323 06:58:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:25.323 06:58:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:25.323 06:58:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:25.323 06:58:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:25.323 06:58:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:25.323 06:58:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:25.323 rmmod nvme_rdma 00:09:25.323 rmmod nvme_fabrics 00:09:25.323 06:58:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:25.323 06:58:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:25.323 06:58:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:25.323 06:58:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1486337 ']' 00:09:25.323 06:58:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1486337 00:09:25.323 06:58:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1486337 ']' 00:09:25.323 06:58:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1486337 00:09:25.323 06:58:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:09:25.323 06:58:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:25.323 06:58:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1486337 00:09:25.323 06:58:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:25.323 06:58:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:25.323 06:58:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1486337' 00:09:25.323 killing process with pid 1486337 00:09:25.323 06:58:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1486337 00:09:25.323 06:58:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1486337 00:09:27.227 06:58:41 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:27.227 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:27.227 00:09:27.227 real 0m14.371s 00:09:27.227 user 0m18.829s 00:09:27.227 sys 0m7.385s 00:09:27.227 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:27.227 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:27.227 ************************************ 00:09:27.227 END TEST nvmf_abort 00:09:27.227 ************************************ 00:09:27.227 06:58:41 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:09:27.227 06:58:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:27.227 06:58:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.227 06:58:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:27.227 ************************************ 00:09:27.227 START TEST nvmf_ns_hotplug_stress 00:09:27.227 ************************************ 00:09:27.227 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:09:27.227 * Looking for test storage... 00:09:27.227 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:27.227 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:27.227 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:27.227 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.227 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.227 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.227 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:27.228 06:58:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.355 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:35.356 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:35.356 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.356 06:58:49 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:35.356 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:35.356 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # uname 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:35.356 06:58:49 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:09:35.356 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:35.357 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:35.357 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:35.357 altname enp217s0f0np0 00:09:35.357 altname ens818f0np0 00:09:35.357 inet 192.168.100.8/24 scope global mlx_0_0 00:09:35.357 valid_lft forever preferred_lft forever 
00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:35.357 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:35.357 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:35.357 altname enp217s0f1np1 00:09:35.357 altname ens818f1np1 00:09:35.357 inet 192.168.100.9/24 scope global mlx_0_1 00:09:35.357 valid_lft forever preferred_lft forever 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:35.357 06:58:49 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:35.357 192.168.100.9' 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:35.357 192.168.100.9' 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # head -n 1 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # head -n 1 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:35.357 192.168.100.9' 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # tail -n +2 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:35.357 06:58:49 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1491335 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1491335 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1491335 ']' 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.357 06:58:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:35.357 [2024-07-24 06:58:49.872526] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:09:35.357 [2024-07-24 06:58:49.872623] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.357 EAL: No free 2048 kB hugepages reported on node 1 00:09:35.617 [2024-07-24 06:58:50.021380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:35.617 [2024-07-24 06:58:50.232086] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:35.617 [2024-07-24 06:58:50.232125] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:35.617 [2024-07-24 06:58:50.232144] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:35.617 [2024-07-24 06:58:50.232155] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:35.617 [2024-07-24 06:58:50.232167] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:35.617 [2024-07-24 06:58:50.232266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.617 [2024-07-24 06:58:50.232386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.617 [2024-07-24 06:58:50.232398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:36.185 06:58:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:36.185 06:58:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:09:36.185 06:58:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:36.185 06:58:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:36.185 06:58:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:36.185 06:58:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:36.185 06:58:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:36.185 06:58:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:36.445 [2024-07-24 06:58:50.870198] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7f51a88bf940) succeed. 00:09:36.445 [2024-07-24 06:58:50.880141] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7f51a8879940) succeed. 00:09:36.705 06:58:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:36.705 06:58:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:36.965 [2024-07-24 06:58:51.474515] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:36.965 06:58:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:37.224 06:58:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:37.483 Malloc0 00:09:37.483 06:58:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:37.483 Delay0 00:09:37.483 06:58:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.742 06:58:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:38.001 NULL1 00:09:38.001 06:58:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:38.001 06:58:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1491782 00:09:38.001 06:58:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:38.001 06:58:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:38.001 06:58:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.261 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.261 06:58:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.520 06:58:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:38.520 06:58:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:38.520 true 00:09:38.779 06:58:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:38.779 06:58:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.779 06:58:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.037 06:58:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:39.037 06:58:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:39.295 true 00:09:39.295 06:58:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:39.295 06:58:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.295 06:58:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.554 06:58:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:39.554 06:58:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1003 00:09:39.813 true 00:09:39.813 06:58:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:39.813 06:58:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.072 06:58:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:40.072 06:58:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:40.072 06:58:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:40.331 true 00:09:40.331 06:58:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:40.331 06:58:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.590 06:58:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:40.590 06:58:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:40.590 06:58:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:40.849 true 00:09:40.849 06:58:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:40.849 06:58:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.108 06:58:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.108 06:58:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:41.108 06:58:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:41.368 true 00:09:41.368 06:58:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:41.368 06:58:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.626 06:58:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.626 06:58:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:41.626 06:58:56 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:41.884 true 00:09:41.884 06:58:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:41.884 06:58:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.143 06:58:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.143 06:58:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:42.143 06:58:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:42.402 true 00:09:42.402 06:58:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:42.402 06:58:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.663 06:58:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.922 06:58:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:42.922 06:58:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:42.922 true 00:09:42.922 06:58:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:42.922 06:58:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.181 06:58:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.440 06:58:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:43.440 06:58:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:43.440 true 00:09:43.440 06:58:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:43.440 06:58:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.699 06:58:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:09:43.958 06:58:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:43.958 06:58:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:43.958 true 00:09:43.958 06:58:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:43.958 06:58:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.216 06:58:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.476 06:58:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:44.476 06:58:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:44.476 true 00:09:44.734 06:58:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:44.734 06:58:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.734 06:58:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.993 06:58:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:44.993 06:58:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:45.252 true 00:09:45.252 06:58:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:45.252 06:58:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.252 06:58:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.511 06:59:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:45.511 06:59:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:45.769 true 00:09:45.770 06:59:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:45.770 06:59:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.770 06:59:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.028 06:59:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:46.028 06:59:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:46.286 true 00:09:46.286 06:59:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:46.286 06:59:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.545 06:59:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.545 06:59:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:46.545 06:59:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:46.803 true 00:09:46.803 06:59:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:46.803 06:59:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.061 06:59:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.061 06:59:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:47.061 06:59:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:47.319 true 00:09:47.319 06:59:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:47.319 06:59:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.578 06:59:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.578 06:59:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:47.578 06:59:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:47.837 true 00:09:47.837 06:59:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:47.837 06:59:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.096 06:59:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.096 06:59:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:48.096 06:59:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:48.354 true 00:09:48.355 06:59:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:48.355 06:59:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.614 06:59:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.614 06:59:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:48.614 06:59:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:48.876 true 00:09:48.876 06:59:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:48.876 06:59:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.135 06:59:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:49.393 06:59:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:49.393 06:59:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:49.393 true 00:09:49.393 06:59:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:49.393 06:59:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.651 06:59:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:49.910 06:59:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:49.910 06:59:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:49.910 true 00:09:49.910 06:59:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:49.910 06:59:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.169 06:59:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.428 06:59:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:50.428 06:59:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:50.428 true 00:09:50.428 06:59:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:50.428 06:59:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.687 06:59:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.946 06:59:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:50.946 06:59:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:50.946 true 00:09:51.205 06:59:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:51.206 06:59:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.206 06:59:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.465 06:59:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:51.465 06:59:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:51.465 true 00:09:51.465 06:59:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:51.465 06:59:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.724 06:59:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.983 06:59:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:51.983 06:59:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:51.983 true 00:09:52.243 06:59:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:52.243 06:59:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.243 06:59:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.502 06:59:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:52.502 06:59:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:52.762 true 00:09:52.762 06:59:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:52.762 06:59:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.762 06:59:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.021 06:59:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:53.021 06:59:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:53.280 true 00:09:53.280 06:59:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:53.280 06:59:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.280 06:59:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.539 06:59:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:53.539 06:59:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:53.798 true 00:09:53.798 06:59:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:53.798 06:59:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.057 06:59:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.057 06:59:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:09:54.057 06:59:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:09:54.316 true 00:09:54.317 06:59:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:54.317 06:59:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.575 06:59:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.575 06:59:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:09:54.575 06:59:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:09:54.834 true 00:09:54.834 06:59:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:54.834 06:59:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.093 06:59:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.093 06:59:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:09:55.093 06:59:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:09:55.352 true 00:09:55.352 06:59:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:55.352 06:59:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.611 06:59:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.870 06:59:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:09:55.870 06:59:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:09:55.870 true 00:09:55.870 06:59:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:55.870 06:59:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.129 06:59:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.388 06:59:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:09:56.388 06:59:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:09:56.388 true 00:09:56.388 06:59:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:56.388 06:59:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.646 06:59:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.905 06:59:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:09:56.905 06:59:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:09:56.905 true 00:09:57.199 06:59:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:57.199 06:59:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.199 06:59:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.458 06:59:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:09:57.458 06:59:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:09:57.458 true 00:09:57.458 06:59:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:57.717 06:59:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.717 06:59:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.977 06:59:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:09:57.977 06:59:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:09:57.977 true 00:09:58.237 06:59:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:58.237 06:59:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.237 06:59:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.496 06:59:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:09:58.496 06:59:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:09:58.755 true 00:09:58.755 06:59:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:58.755 06:59:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.756 06:59:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.014 06:59:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:09:59.014 06:59:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:09:59.273 true 00:09:59.273 06:59:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:59.273 06:59:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.273 06:59:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.532 06:59:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:09:59.532 06:59:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:09:59.791 true 00:09:59.791 06:59:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:09:59.791 06:59:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.051 06:59:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.051 06:59:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:10:00.051 06:59:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:10:00.310 true 00:10:00.310 06:59:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:10:00.310 06:59:14 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.569 06:59:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.569 06:59:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:10:00.569 06:59:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:10:00.828 true 00:10:00.828 06:59:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:10:00.828 06:59:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.087 06:59:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.346 06:59:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:10:01.346 06:59:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:10:01.346 true 00:10:01.346 06:59:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:10:01.346 06:59:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.605 06:59:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.864 06:59:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:10:01.864 06:59:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:01.864 true 00:10:01.864 06:59:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:10:01.864 06:59:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.122 06:59:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.380 06:59:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:10:02.380 06:59:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:10:02.380 true 
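The entries above all repeat the same hot-plug cycle from ns_hotplug_stress.sh lines 44-50: while the spdk_nvme_perf process started earlier (PERF_PID=1491782) is still alive, the script removes namespace 1 from nqn.2016-06.io.spdk:cnode1, re-adds Delay0, bumps null_size by one, and resizes NULL1 under live I/O. A minimal bash sketch of that cycle, reconstructed only from the rpc.py calls visible in the log (the while-loop form is an assumption, not copied from the script):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1000
    # PERF_PID is the spdk_nvme_perf process launched before the loop (1491782 in this run)
    while kill -0 "$PERF_PID"; do                   # stop once the perf workload exits
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1    # hot-remove namespace 1
        "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0  # hot-add it back, backed by Delay0
        null_size=$((null_size + 1))
        "$rpc" bdev_null_resize NULL1 "$null_size"  # grow NULL1 while I/O is in flight
    done

Each pass through this loop is what produces one remove_ns/add_ns/bdev_null_resize triplet in the log, with null_size climbing from 1000 toward 1058.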
00:10:02.380 06:59:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:10:02.639 06:59:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.639 06:59:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.897 06:59:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:10:02.897 06:59:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:10:03.156 true 00:10:03.156 06:59:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:10:03.156 06:59:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.156 06:59:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.414 06:59:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:10:03.414 06:59:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:10:03.672 true 00:10:03.672 06:59:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:10:03.672 06:59:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.931 06:59:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.931 06:59:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:10:03.931 06:59:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:10:04.189 true 00:10:04.189 06:59:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:10:04.189 06:59:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.447 06:59:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.447 06:59:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:10:04.447 06:59:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:10:04.706 true 00:10:04.706 06:59:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:10:04.706 06:59:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.963 06:59:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.963 06:59:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:10:04.963 06:59:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:10:05.220 true 00:10:05.220 06:59:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:10:05.220 06:59:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.479 06:59:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.736 06:59:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:10:05.736 06:59:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:10:05.736 true 00:10:05.736 06:59:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:10:05.736 06:59:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.994 06:59:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.252 06:59:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:10:06.252 06:59:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:10:06.252 true 00:10:06.252 06:59:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:10:06.252 06:59:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.525 06:59:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.783 06:59:21 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:10:06.783 06:59:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:10:06.783 true 00:10:06.783 06:59:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:10:06.783 06:59:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.041 06:59:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.299 06:59:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:10:07.299 06:59:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:10:07.299 true 00:10:07.557 06:59:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:10:07.557 06:59:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.557 06:59:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.815 06:59:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:10:07.815 06:59:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:10:08.073 true 00:10:08.073 06:59:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:10:08.073 06:59:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.073 06:59:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.330 06:59:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:10:08.330 06:59:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:10:08.587 true 00:10:08.587 06:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:10:08.587 06:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.846 06:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.846 06:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:10:08.846 06:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:10:09.104 true 00:10:09.104 06:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:10:09.104 06:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.362 06:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.362 Initializing NVMe Controllers 00:10:09.362 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:10:09.362 Controller IO queue size 128, less than required. 00:10:09.362 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:09.362 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:09.362 Initialization complete. Launching workers. 00:10:09.362 ======================================================== 00:10:09.362 Latency(us) 00:10:09.362 Device Information : IOPS MiB/s Average min max 00:10:09.362 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 36376.23 17.76 3518.73 1784.97 4119.51 00:10:09.362 ======================================================== 00:10:09.362 Total : 36376.23 17.76 3518.73 1784.97 4119.51 00:10:09.362 00:10:09.362 06:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1058 00:10:09.362 06:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:10:09.620 true 00:10:09.620 06:59:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1491782 00:10:09.620 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1491782) - No such process 00:10:09.620 06:59:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1491782 00:10:09.620 06:59:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.878 06:59:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:10.137 06:59:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:10.137 06:59:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:10.137 06:59:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 
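By this point the perf workload has completed (the summary above reports 36376.23 IOPS, 17.76 MiB/s and 3518.73 us average latency against NSID 2), the kill -0 probe finds the process gone, and both namespaces are removed. The test then switches to an eight-worker add/remove phase: nthreads=8, and the entries that follow create one null bdev per worker. A sketch of that setup, assuming the obvious loop behind the logged xtrace:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    for ((i = 0; i < nthreads; i++)); do
        "$rpc" bdev_null_create "null$i" 100 4096   # same size/block-size arguments as logged
    done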
00:10:10.137 06:59:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:10.137 06:59:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:10.137 null0 00:10:10.137 06:59:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:10.137 06:59:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:10.137 06:59:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:10.395 null1 00:10:10.395 06:59:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:10.395 06:59:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:10.395 06:59:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:10.672 null2 00:10:10.672 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:10.672 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:10.672 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:10.672 null3 00:10:10.672 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:10.672 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:10.672 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:10.931 null4 00:10:10.931 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:10.931 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:10.931 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:11.190 null5 00:10:11.190 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:11.190 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:11.190 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:11.190 null6 00:10:11.190 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:11.190 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:11.190 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:11.449 null7 00:10:11.449 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:11.449 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:11.449 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:11.449 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:11.449 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:11.449 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:11.449 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:11.449 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:11.449 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:11.449 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:11.449 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.449 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:11.449 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
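With null0 through null7 in place, the script spawns the workers: each (( i < nthreads )) test above is followed by an add_remove call and a pids+=($!) that records it, pairing namespace IDs 1 through 8 with bdevs null0 through null7. A sketch of that spawn loop as the xtrace suggests (the backgrounding and the eventual wait on pids are inferred, not shown verbatim here); the body of add_remove itself is sketched after the last entry below:

    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # nsid 1..8 paired with null0..null7
        pids+=($!)                         # remember each worker, presumably for a later wait
    done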
00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
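Each worker traced with the @14/@16/@17/@18 markers appears to run a short attach/detach loop against the same subsystem. A minimal sketch, assuming the loop bound of 10 visible in the repeated (( i < 10 )) checks and reusing the nqn and nsid-to-bdev pairing shown in the trace (argument order copied as-is from the logged RPC calls):

  # one worker: repeatedly attach and detach a single namespace on cnode1
  add_remove() {
      local nsid=$1 bdev=$2
      for (( i = 0; i < 10; i++ )); do
          "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }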
00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
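Because eight such workers run concurrently against cnode1, the add_ns and remove_ns RPCs in the log interleave in an essentially arbitrary order; that churn of namespaces appearing and disappearing while the target stays up is the hotplug stress being exercised. Once all workers are launched, the script simply blocks on the collected pids (the "wait 1497660 ..." entry a little further down in the trace); in sketch form, under the same assumptions as above:

  # reap all eight add_remove workers before tearing the target down
  wait "${pids[@]}"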
00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1497660 1497661 1497663 1497666 1497667 1497669 1497671 1497673 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.450 06:59:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:11.709 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:11.709 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.709 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:11.709 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:11.709 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:11.709 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:11.709 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:11.709 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:11.968 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.968 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.968 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:11.968 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.968 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.968 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:11.968 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.968 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.968 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:11.968 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.968 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.968 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:11.968 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.968 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.968 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:11.968 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.968 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.968 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:11.968 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.968 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.968 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:11.968 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.968 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.968 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:11.968 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:11.968 06:59:26 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:11.968 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:11.968 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:11.968 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.968 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:11.968 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:11.969 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:12.228 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.228 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.228 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.228 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.228 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:12.228 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:12.228 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.228 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.228 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.228 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.228 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:12.228 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:12.228 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.228 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.228 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:12.228 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.228 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.228 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:12.228 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.228 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.228 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:12.228 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.228 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.228 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:12.488 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.488 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:12.488 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:12.488 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:12.488 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:12.488 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:12.488 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:12.488 06:59:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:12.488 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.488 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.488 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:12.488 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.488 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.488 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:12.488 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.488 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.488 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:12.489 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.489 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.489 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.489 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:12.489 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.489 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:12.489 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.489 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.489 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:12.748 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.748 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.748 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:12.748 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.748 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.748 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:12.748 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:12.748 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.748 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:12.748 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:12.748 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:12.748 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:12.748 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:12.748 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:13.008 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.008 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.008 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:13.008 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.008 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.008 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:13.008 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.008 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.008 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:13.008 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.008 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.008 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:13.008 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.008 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.008 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:13.008 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.008 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.008 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:13.008 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.008 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.008 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:13.008 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.008 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.008 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.268 06:59:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:13.528 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:13.528 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.528 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:13.528 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:13.528 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:13.528 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:13.528 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:13.528 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:13.787 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.787 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.787 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:13.787 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.787 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.787 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:13.787 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.787 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.787 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.787 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:13.787 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.787 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:13.787 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.787 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.787 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:13.787 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.787 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.787 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:13.787 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.787 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.787 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:13.787 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.787 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.787 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.047 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:14.306 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.306 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:14.306 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:14.306 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:14.306 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:14.306 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:14.306 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:14.306 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:14.565 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.565 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.565 06:59:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:14.565 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.565 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.565 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:14.565 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.565 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.565 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:14.565 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.565 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.565 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:14.565 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.565 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.565 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:14.565 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.565 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.565 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:14.565 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.565 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.565 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:14.565 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.565 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.565 06:59:29 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:14.565 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:14.565 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:14.824 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:14.824 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:14.824 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:14.824 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:14.824 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:14.824 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.824 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.824 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.824 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:14.824 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.824 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.824 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:14.824 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.824 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.824 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:14.824 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:10:14.824 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.824 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:14.824 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.824 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.824 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:14.824 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.824 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.824 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:14.824 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.824 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.824 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:14.824 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.824 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.824 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:15.084 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:15.084 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:15.084 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:15.084 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:15.084 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:15.084 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:15.084 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.084 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:15.344 rmmod nvme_rdma 00:10:15.344 rmmod nvme_fabrics 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1491335 ']' 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1491335 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1491335 ']' 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1491335 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1491335 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1491335' 00:10:15.344 killing process with pid 1491335 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 1491335 00:10:15.344 06:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1491335 00:10:17.252 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:17.252 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:17.252 00:10:17.252 real 0m50.221s 00:10:17.252 user 3m26.391s 00:10:17.252 sys 0m18.876s 00:10:17.252 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:17.252 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:17.252 ************************************ 00:10:17.252 END TEST nvmf_ns_hotplug_stress 00:10:17.252 ************************************ 00:10:17.252 06:59:31 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:10:17.252 06:59:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:17.252 06:59:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:17.252 06:59:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:17.252 ************************************ 00:10:17.252 START TEST nvmf_delete_subsystem 00:10:17.252 ************************************ 00:10:17.252 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:10:17.512 * Looking for test storage... 00:10:17.512 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:17.512 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:17.512 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:17.512 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.512 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.512 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.512 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.512 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.512 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.512 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.512 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.512 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.512 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.512 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:17.512 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:17.512 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.512 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.512 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:17.512 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.512 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:17.512 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.512 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.513 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.513 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.513 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.513 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.513 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:17.513 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.513 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:10:17.513 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:17.513 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:17.513 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.513 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.513 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.513 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:17.513 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:17.513 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:17.513 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:17.513 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:17.513 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.513 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:17.513 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:17.513 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:17.513 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.513 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.513 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.513 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:17.513 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:17.513 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:17.513 06:59:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@298 -- # local -ga mlx 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:27.501 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:27.501 06:59:40 
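At this point the first Mellanox port (0000:d9:00.0, vendor 0x15b3 device 0x1015) has been matched against the mlx device-ID list, and because the transport is RDMA the connect helper is switched to 'nvme connect -i 15'; the second port is handled identically just below. After the ID filtering, each matched PCI function is mapped to its Linux netdev through sysfs. A minimal sketch of that lookup, with the variable names mirroring the trace and the explicit PCI addresses taken from this run (this is an illustration, not the literal nvmf/common.sh code):

  # Sketch: resolve each matched PCI function to its network interface name.
  net_devs=()
  for pci in 0000:d9:00.0 0000:d9:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
      net_devs+=("${pci_net_devs[@]}")                    # -> mlx_0_0 and mlx_0_1 on this host
  done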
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:27.501 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:27.501 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:27.502 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:27.502 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ 
yes == yes ]] 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # rdma_device_init 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # uname 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:27.502 
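The allocate_nic_ips call entered above walks get_rdma_if_list and makes sure every RDMA interface carries an address in the 192.168.100.0/24 test subnet, starting at NVMF_IP_LEAST_ADDR (8); on this host both ports already hold their addresses, so only the lookup branch is exercised in the trace that follows. A rough sketch of the logic, using the helper names and the ip(8) pipeline visible in the trace (the assign-if-missing branch is an assumption about the fallback path):

  # Condensed view of allocate_nic_ips (fallback branch assumed, for illustration only).
  NVMF_IP_PREFIX=192.168.100
  count=8                                   # NVMF_IP_LEAST_ADDR
  for nic_name in $(get_rdma_if_list); do   # mlx_0_0 mlx_0_1 here
      ip=$(ip -o -4 addr show "$nic_name" | awk '{print $4}' | cut -d/ -f1)
      if [[ -z $ip ]]; then
          ip addr add "$NVMF_IP_PREFIX.$count/24" dev "$nic_name"   # assign the next test address
      fi
      (( count++ ))
  done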
06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:27.502 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:27.502 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:27.502 altname enp217s0f0np0 00:10:27.502 altname ens818f0np0 00:10:27.502 inet 192.168.100.8/24 scope global mlx_0_0 00:10:27.502 valid_lft forever preferred_lft forever 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:27.502 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:27.502 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:27.502 altname enp217s0f1np1 00:10:27.502 altname ens818f1np1 00:10:27.502 inet 192.168.100.9/24 scope global mlx_0_1 00:10:27.502 valid_lft forever preferred_lft forever 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:27.502 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:27.503 
06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:27.503 192.168.100.9' 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:27.503 192.168.100.9' 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # head -n 1 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:27.503 192.168.100.9' 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # tail -n +2 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # head -n 1 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1503000 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1503000 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1503000 ']' 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:27.503 06:59:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:27.503 [2024-07-24 06:59:40.639300] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:10:27.503 [2024-07-24 06:59:40.639394] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:27.503 EAL: No free 2048 kB hugepages reported on node 1 00:10:27.503 [2024-07-24 06:59:40.787670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:27.503 [2024-07-24 06:59:41.008242] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:27.503 [2024-07-24 06:59:41.008300] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:27.503 [2024-07-24 06:59:41.008317] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:27.503 [2024-07-24 06:59:41.008328] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:27.503 [2024-07-24 06:59:41.008339] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:27.503 [2024-07-24 06:59:41.008446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.503 [2024-07-24 06:59:41.008460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:27.503 [2024-07-24 06:59:41.473686] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028b40/0x7f95d6917940) succeed. 00:10:27.503 [2024-07-24 06:59:41.483048] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028cc0/0x7f95d68d2940) succeed. 
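The target application is now listening on /var/tmp/spdk.sock and both IB devices have been created; the trace below builds the actual delete_subsystem fixture: an RDMA transport, a subsystem capped at 10 namespaces, a listener on 192.168.100.8:4420, a null bdev wrapped in a delay bdev (so that I/O issued against it stays outstanding long enough to race with the delete), and the namespace attach. Collapsed into equivalent rpc.py invocations it is roughly the sequence below; the command names and arguments are the ones visible in the trace, while the short $rpc variable is only an illustrative convenience (the script itself goes through the rpc_cmd wrapper):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc bdev_null_create NULL1 1000 512        # 1000 MB null bdev, 512-byte blocks
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The four 1000000 values passed to bdev_delay_create are the delay bdev's average/p99 read and write latencies (microseconds in SPDK's delay bdev), i.e. on the order of a second of added latency per I/O, which is what keeps requests in flight during the delete.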
00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:27.503 [2024-07-24 06:59:41.664167] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:27.503 NULL1 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:27.503 Delay0 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1503089 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:27.503 06:59:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 
-o 512 -P 4 00:10:27.503 EAL: No free 2048 kB hugepages reported on node 1 00:10:27.503 [2024-07-24 06:59:41.811796] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:10:29.407 06:59:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:29.407 06:59:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.407 06:59:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:30.343 NVMe io qpair process completion error 00:10:30.343 NVMe io qpair process completion error 00:10:30.343 NVMe io qpair process completion error 00:10:30.343 NVMe io qpair process completion error 00:10:30.343 NVMe io qpair process completion error 00:10:30.343 NVMe io qpair process completion error 00:10:30.343 06:59:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.343 06:59:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:10:30.343 06:59:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1503089 00:10:30.343 06:59:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:30.911 06:59:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:30.911 06:59:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1503089 00:10:30.911 06:59:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Write completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Write completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Write completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Write completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Write completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Write completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Write completed with error (sct=0, sc=8) 
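Here the test reaches its point: spdk_nvme_perf (pid 1503089 in this run) is driving a 128-deep random read/write load against the delayed namespace when nvmf_delete_subsystem is issued, so the outstanding requests complete with errors and the initiator-side qpairs report completion errors, which is what the flood of messages below shows. The script then waits, bounded, for the perf process to exit. Reconstructed from the @32-@38 trace lines it is roughly the loop below; the $rpc/$perf_pid shorthands and the timeout handling are illustrative assumptions (the trace uses rpc_cmd and the literal pid):

  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # pull the subsystem while I/O is in flight
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do               # perf should exit once its I/O starts failing
      if (( delay++ > 30 )); then                         # ~15 s budget at 0.5 s per poll
          echo "perf did not exit after subsystem delete" >&2   # assumed failure handling
          exit 1
      fi
      sleep 0.5
  done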
00:10:31.480 starting I/O failed: -6 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Write completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Write completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Write completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Write completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Write completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Write completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Write completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Write completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Write completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Write completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Write completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Write completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Write completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 starting I/O failed: -6 00:10:31.480 Write completed with error (sct=0, sc=8) 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 Write completed with error (sct=0, sc=8) 00:10:31.480 Write completed with error (sct=0, sc=8) 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 Write completed with error (sct=0, sc=8) 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 Read completed with error (sct=0, sc=8) 
00:10:31.480 Read completed with error (sct=0, sc=8) 00:10:31.480 Write completed with error (sct=0, sc=8) [... repeated Read/Write completions with error (sct=0, sc=8), interleaved with "starting I/O failed: -6" for new submissions, condensed ...]
00:10:31.481 Initializing NVMe Controllers
00:10:31.481 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:10:31.481 Controller IO queue size 128, less than required.
00:10:31.481 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:31.481 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:31.481 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:31.481 Initialization complete. Launching workers.
00:10:31.481 ========================================================
00:10:31.481 Latency(us)
00:10:31.481 Device Information : IOPS MiB/s Average min max
00:10:31.481 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.46 0.04 1594414.57 1000194.94 2977720.72
00:10:31.481 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.46 0.04 1596543.05 1001746.42 2979771.77
00:10:31.481 ========================================================
00:10:31.481 Total : 160.92 0.08 1595478.81 1000194.94 2979771.77
00:10:31.481 06:59:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:10:31.481 06:59:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1503089
00:10:31.481 06:59:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:10:31.481 [2024-07-24 06:59:45.946361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:10:31.482 [2024-07-24 06:59:45.946428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
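The trace above shows the delete_subsystem test polling the backgrounded spdk_nvme_perf process (pid 1503089) with (( delay++ > 30 )), kill -0 and sleep 0.5 while the subsystem is torn down underneath it. A minimal sketch of that wait pattern, with illustrative variable names rather than the literal delete_subsystem.sh source:

    perf_pid=$!      # PID of the backgrounded spdk_nvme_perf run
    delay=0
    # kill -0 sends no signal; it only tests whether the process still exists.
    while kill -0 "$perf_pid" 2> /dev/null; do
        (( delay++ > 30 )) && { echo "perf did not exit in time" >&2; break; }
        sleep 0.5
    done

Once perf exits (here with errors, because the controller entered the failed state), kill -0 reports "No such process" and the loop ends, as the next lines show.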
00:10:31.482 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1503089 00:10:32.050 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1503089) - No such process 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1503089 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1503089 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1503089 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:32.050 [2024-07-24 06:59:46.454691] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1503931 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1503931 00:10:32.050 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:32.050 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.050 [2024-07-24 06:59:46.586939] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:10:32.624 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:32.624 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1503931 00:10:32.624 06:59:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:32.940 06:59:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:32.940 06:59:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1503931 00:10:32.940 06:59:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:33.508 06:59:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:33.508 06:59:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1503931 00:10:33.509 06:59:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:34.075 06:59:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:34.075 06:59:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1503931 00:10:34.075 06:59:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:34.643 06:59:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:34.643 06:59:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1503931 00:10:34.643 06:59:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:34.902 06:59:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:34.902 06:59:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1503931 00:10:34.902 06:59:49 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:35.469 06:59:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:35.469 06:59:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1503931 00:10:35.469 06:59:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:36.036 06:59:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:36.036 06:59:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1503931 00:10:36.036 06:59:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:36.605 06:59:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:36.605 06:59:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1503931 00:10:36.605 06:59:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:37.172 06:59:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:37.172 06:59:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1503931 00:10:37.172 06:59:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:37.431 06:59:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:37.431 06:59:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1503931 00:10:37.431 06:59:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:37.999 06:59:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:37.999 06:59:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1503931 00:10:37.999 06:59:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:38.566 06:59:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:38.566 06:59:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1503931 00:10:38.566 06:59:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:39.132 06:59:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:39.132 06:59:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1503931 00:10:39.132 06:59:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:39.132 Initializing NVMe Controllers 00:10:39.132 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:10:39.132 Controller IO queue size 128, less than required. 00:10:39.132 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:10:39.132 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:39.132 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:39.132 Initialization complete. Launching workers. 00:10:39.132 ======================================================== 00:10:39.132 Latency(us) 00:10:39.132 Device Information : IOPS MiB/s Average min max 00:10:39.132 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001720.97 1000075.03 1005011.00 00:10:39.132 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002584.65 1000102.79 1006542.41 00:10:39.132 ======================================================== 00:10:39.132 Total : 256.00 0.12 1002152.81 1000075.03 1006542.41 00:10:39.132 00:10:39.699 06:59:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:39.699 06:59:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1503931 00:10:39.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1503931) - No such process 00:10:39.699 06:59:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1503931 00:10:39.699 06:59:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:39.699 06:59:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:39.699 06:59:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:39.699 06:59:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:10:39.699 06:59:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:39.699 06:59:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:39.699 06:59:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:10:39.699 06:59:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:39.699 06:59:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:39.699 rmmod nvme_rdma 00:10:39.699 rmmod nvme_fabrics 00:10:39.699 06:59:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:39.699 06:59:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:10:39.699 06:59:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:10:39.699 06:59:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1503000 ']' 00:10:39.699 06:59:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1503000 00:10:39.699 06:59:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1503000 ']' 00:10:39.699 06:59:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1503000 00:10:39.699 06:59:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:10:39.699 06:59:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:10:39.699 06:59:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1503000 00:10:39.699 06:59:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:39.700 06:59:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:39.700 06:59:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1503000' 00:10:39.700 killing process with pid 1503000 00:10:39.700 06:59:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 1503000 00:10:39.700 06:59:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 1503000 00:10:41.077 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:41.077 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:41.077 00:10:41.077 real 0m23.917s 00:10:41.077 user 0m52.472s 00:10:41.077 sys 0m8.078s 00:10:41.077 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:41.077 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:41.077 ************************************ 00:10:41.077 END TEST nvmf_delete_subsystem 00:10:41.077 ************************************ 00:10:41.336 06:59:55 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:10:41.336 06:59:55 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:41.336 06:59:55 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:41.336 06:59:55 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:41.336 ************************************ 00:10:41.336 START TEST nvmf_host_management 00:10:41.336 ************************************ 00:10:41.336 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:10:41.336 * Looking for test storage... 
00:10:41.337 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:10:41.337 06:59:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@297 -- # local -ga x722 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:51.325 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma 
]] 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:51.325 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:51.325 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:51.325 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@414 -- # is_hw=yes 00:10:51.325 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # rdma_device_init 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # uname 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:51.326 07:00:04 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:51.326 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:51.326 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:51.326 altname enp217s0f0np0 00:10:51.326 altname ens818f0np0 00:10:51.326 inet 192.168.100.8/24 scope global mlx_0_0 00:10:51.326 valid_lft forever preferred_lft forever 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:51.326 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:51.326 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:51.326 altname enp217s0f1np1 00:10:51.326 altname ens818f1np1 00:10:51.326 inet 192.168.100.9/24 scope global mlx_0_1 00:10:51.326 valid_lft forever preferred_lft forever 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- 
# '[' '' == iso ']' 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:51.326 07:00:04 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:51.326 192.168.100.9' 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:51.326 192.168.100.9' 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # head -n 1 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:51.326 192.168.100.9' 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # tail -n +2 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # head -n 1 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1509784 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1509784 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1509784 ']' 00:10:51.326 07:00:04 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:51.326 07:00:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:51.326 [2024-07-24 07:00:04.614751] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:10:51.326 [2024-07-24 07:00:04.614854] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.326 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.326 [2024-07-24 07:00:04.764811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:51.326 [2024-07-24 07:00:04.978337] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.326 [2024-07-24 07:00:04.978385] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.326 [2024-07-24 07:00:04.978399] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.326 [2024-07-24 07:00:04.978410] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.326 [2024-07-24 07:00:04.978422] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
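nvmfappstart launches nvmf_tgt in the background and then blocks in waitforlisten until pid 1509784 is serving JSON-RPC on /var/tmp/spdk.sock. A rough sketch of what such a wait helper does, assuming rpc.py polling and the default socket path (the real implementation lives in autotest_common.sh and may differ):

    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
        while kill -0 "$pid" 2> /dev/null; do
            # rpc_get_methods only succeeds once the app is answering RPC requests
            if scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1   # process exited before it ever started listening
    }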
00:10:51.326 [2024-07-24 07:00:04.978557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.326 [2024-07-24 07:00:04.978658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:51.326 [2024-07-24 07:00:04.978741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.326 [2024-07-24 07:00:04.978768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:51.326 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:51.326 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:10:51.326 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:51.326 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:51.326 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:51.326 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:51.326 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:51.326 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.326 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:51.326 [2024-07-24 07:00:05.475548] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7f753c652940) succeed. 00:10:51.326 [2024-07-24 07:00:05.484886] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7f753c60e940) succeed. 
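With both mlx5 IB devices registered, the host_management test builds its target through rpc_cmd, as the trace below shows: an RDMA transport, a Malloc0 bdev, a subsystem, a namespace, and an RDMA listener on 192.168.100.8:4420. Only the transport call is traced explicitly (host_management.sh@18); the rpcs.txt contents are not printed, so the rest of the sequence is an illustrative reconstruction using MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 from the test header:

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192   # traced above
    # the remaining calls are assumed, based on the Malloc0 bdev and the 4420 listener seen below
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420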
00:10:51.326 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.326 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:51.326 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:51.326 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:51.326 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:51.326 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:51.326 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:51.326 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.326 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:51.326 Malloc0 00:10:51.326 [2024-07-24 07:00:05.936986] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:51.326 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.326 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:51.326 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:51.326 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:51.585 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1510093 00:10:51.585 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1510093 /var/tmp/bdevperf.sock 00:10:51.585 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1510093 ']' 00:10:51.585 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:51.585 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:51.585 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:51.585 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:51.585 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:51.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
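The cat of rpcs.txt above pipes a batch of RPC method lines into rpc_cmd. The file's exact contents are not echoed in the trace, but from the Malloc0 bdev, the 192.168.100.8:4420 RDMA listener, and the cnode0/host0 NQNs used later, it plausibly amounts to something like the following sketch (standard SPDK RPC names; sizes and serial number are assumptions):

    # Hypothetical reconstruction of the rpcs.txt batch driven through rpc_cmd.
    bdev_malloc_create 64 512 -b Malloc0
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma \
        -a 192.168.100.8 -s 4420
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0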
00:10:51.585 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:51.585 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:10:51.585 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:51.585 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:10:51.585 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:51.585 07:00:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:51.585 { 00:10:51.585 "params": { 00:10:51.585 "name": "Nvme$subsystem", 00:10:51.585 "trtype": "$TEST_TRANSPORT", 00:10:51.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:51.585 "adrfam": "ipv4", 00:10:51.585 "trsvcid": "$NVMF_PORT", 00:10:51.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:51.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:51.585 "hdgst": ${hdgst:-false}, 00:10:51.585 "ddgst": ${ddgst:-false} 00:10:51.585 }, 00:10:51.585 "method": "bdev_nvme_attach_controller" 00:10:51.585 } 00:10:51.585 EOF 00:10:51.585 )") 00:10:51.585 07:00:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:10:51.585 07:00:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:10:51.585 07:00:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:10:51.585 07:00:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:51.585 "params": { 00:10:51.585 "name": "Nvme0", 00:10:51.585 "trtype": "rdma", 00:10:51.585 "traddr": "192.168.100.8", 00:10:51.585 "adrfam": "ipv4", 00:10:51.585 "trsvcid": "4420", 00:10:51.585 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:51.585 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:51.585 "hdgst": false, 00:10:51.585 "ddgst": false 00:10:51.585 }, 00:10:51.585 "method": "bdev_nvme_attach_controller" 00:10:51.585 }' 00:10:51.585 [2024-07-24 07:00:06.075583] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:10:51.585 [2024-07-24 07:00:06.075690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1510093 ] 00:10:51.585 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.843 [2024-07-24 07:00:06.220095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.843 [2024-07-24 07:00:06.444891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.412 Running I/O for 10 seconds... 
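gen_nvmf_target_json emits the bdev_nvme_attach_controller configuration printed above, and bdevperf receives it through process substitution, which is why the command line shows --json /dev/fd/63. Standalone, the same run could be reproduced roughly as follows (paths as in this workspace; a sketch, not the harness invocation verbatim):

    # Feed the generated controller config to bdevperf over an anonymous fd and
    # drive 10 seconds of 64-deep, 64 KiB verify I/O against the RDMA target.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10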
00:10:52.412 07:00:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:52.412 07:00:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:10:52.412 07:00:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:52.412 07:00:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.412 07:00:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:52.412 07:00:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.412 07:00:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:52.412 07:00:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:52.412 07:00:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:52.412 07:00:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:52.412 07:00:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:52.412 07:00:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:52.412 07:00:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:52.412 07:00:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:52.412 07:00:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:52.412 07:00:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:52.412 07:00:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.412 07:00:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:52.412 07:00:07 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.412 07:00:07 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=176 00:10:52.412 07:00:07 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 176 -ge 100 ']' 00:10:52.412 07:00:07 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:52.412 07:00:07 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:52.412 07:00:07 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:52.412 07:00:07 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:52.412 07:00:07 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.412 07:00:07 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
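The waitforio trace above shows the pass criterion for this phase: bdev_get_iostat on Nvme0n1 reported 176 completed reads, clearing the 100-read threshold on the first of up to ten attempts, so ret=0 and the loop breaks before the host is removed from the subsystem. A condensed sketch of that helper (retry pacing is not visible in the trace and is assumed):

    # Poll bdevperf's RPC socket until the bdev has serviced >= 100 reads.
    waitforio() {
        local sock=$1 bdev=$2 count i
        for ((i = 10; i != 0; i--)); do
            count=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
                    | jq -r '.bdevs[0].num_read_ops')
            [[ $count -ge 100 ]] && return 0
            sleep 0.25   # assumed pacing between retries; not shown in the trace
        done
        return 1
    }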
00:10:52.671 07:00:07 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.671 07:00:07 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:52.671 07:00:07 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.671 07:00:07 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:52.671 07:00:07 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.671 07:00:07 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:10:53.606 [2024-07-24 07:00:08.058664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000191cfe40 len:0x10000 key:0x182300 00:10:53.606 [2024-07-24 07:00:08.058731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.606 [2024-07-24 07:00:08.058770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000191bfd80 len:0x10000 key:0x182300 00:10:53.606 [2024-07-24 07:00:08.058784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.606 [2024-07-24 07:00:08.058800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000191afcc0 len:0x10000 key:0x182300 00:10:53.606 [2024-07-24 07:00:08.058816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.606 [2024-07-24 07:00:08.058831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001919fc00 len:0x10000 key:0x182300 00:10:53.606 [2024-07-24 07:00:08.058844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.606 [2024-07-24 07:00:08.058859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001918fb40 len:0x10000 key:0x182300 00:10:53.606 [2024-07-24 07:00:08.058871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.606 [2024-07-24 07:00:08.058885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001917fa80 len:0x10000 key:0x182300 00:10:53.606 [2024-07-24 07:00:08.058897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.058911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001916f9c0 len:0x10000 key:0x182300 00:10:53.607 [2024-07-24 07:00:08.058923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.058938] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001915f900 len:0x10000 key:0x182300 00:10:53.607 [2024-07-24 07:00:08.058950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.058963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001914f840 len:0x10000 key:0x182300 00:10:53.607 [2024-07-24 07:00:08.058975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.058989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001913f780 len:0x10000 key:0x182300 00:10:53.607 [2024-07-24 07:00:08.059002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001912f6c0 len:0x10000 key:0x182300 00:10:53.607 [2024-07-24 07:00:08.059028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001911f600 len:0x10000 key:0x182300 00:10:53.607 [2024-07-24 07:00:08.059054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001910f540 len:0x10000 key:0x182300 00:10:53.607 [2024-07-24 07:00:08.059080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a4b280 len:0x10000 key:0x181b00 00:10:53.607 [2024-07-24 07:00:08.059106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a3b1c0 len:0x10000 key:0x181b00 00:10:53.607 [2024-07-24 07:00:08.059134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a2b100 len:0x10000 key:0x181b00 00:10:53.607 [2024-07-24 07:00:08.059159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x200003a1b040 len:0x10000 key:0x181b00 00:10:53.607 [2024-07-24 07:00:08.059185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a0af80 len:0x10000 key:0x181b00 00:10:53.607 [2024-07-24 07:00:08.059210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196efd00 len:0x10000 key:0x182500 00:10:53.607 [2024-07-24 07:00:08.059236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d4bd000 len:0x10000 key:0x182100 00:10:53.607 [2024-07-24 07:00:08.059261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d49c000 len:0x10000 key:0x182100 00:10:53.607 [2024-07-24 07:00:08.059287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d47b000 len:0x10000 key:0x182100 00:10:53.607 [2024-07-24 07:00:08.059313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d45a000 len:0x10000 key:0x182100 00:10:53.607 [2024-07-24 07:00:08.059339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d439000 len:0x10000 key:0x182100 00:10:53.607 [2024-07-24 07:00:08.059364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d418000 len:0x10000 key:0x182100 00:10:53.607 [2024-07-24 07:00:08.059389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d3f7000 len:0x10000 key:0x182100 00:10:53.607 [2024-07-24 07:00:08.059417] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d3d6000 len:0x10000 key:0x182100 00:10:53.607 [2024-07-24 07:00:08.059444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d3b5000 len:0x10000 key:0x182100 00:10:53.607 [2024-07-24 07:00:08.059470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d394000 len:0x10000 key:0x182100 00:10:53.607 [2024-07-24 07:00:08.059495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d373000 len:0x10000 key:0x182100 00:10:53.607 [2024-07-24 07:00:08.059521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d352000 len:0x10000 key:0x182100 00:10:53.607 [2024-07-24 07:00:08.059546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d331000 len:0x10000 key:0x182100 00:10:53.607 [2024-07-24 07:00:08.059572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d310000 len:0x10000 key:0x182100 00:10:53.607 [2024-07-24 07:00:08.059598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d2ef000 len:0x10000 key:0x182100 00:10:53.607 [2024-07-24 07:00:08.059623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d6ee000 len:0x10000 key:0x182100 00:10:53.607 [2024-07-24 07:00:08.059670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d6cd000 len:0x10000 key:0x182100 00:10:53.607 [2024-07-24 07:00:08.059696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d6ac000 len:0x10000 key:0x182100 00:10:53.607 [2024-07-24 07:00:08.059724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d68b000 len:0x10000 key:0x182100 00:10:53.607 [2024-07-24 07:00:08.059751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d66a000 len:0x10000 key:0x182100 00:10:53.607 [2024-07-24 07:00:08.059779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d649000 len:0x10000 key:0x182100 00:10:53.607 [2024-07-24 07:00:08.059805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d628000 len:0x10000 key:0x182100 00:10:53.607 [2024-07-24 07:00:08.059831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d607000 len:0x10000 key:0x182100 00:10:53.607 [2024-07-24 07:00:08.059857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d5e6000 len:0x10000 key:0x182100 00:10:53.607 [2024-07-24 07:00:08.059883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d5c5000 len:0x10000 key:0x182100 00:10:53.607 [2024-07-24 07:00:08.059909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059924] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d5a4000 len:0x10000 key:0x182100 00:10:53.607 [2024-07-24 07:00:08.059936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d583000 len:0x10000 key:0x182100 00:10:53.607 [2024-07-24 07:00:08.059961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.059975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d562000 len:0x10000 key:0x182100 00:10:53.607 [2024-07-24 07:00:08.059996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.060010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196dfc40 len:0x10000 key:0x182500 00:10:53.607 [2024-07-24 07:00:08.060024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.060039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196cfb80 len:0x10000 key:0x182500 00:10:53.607 [2024-07-24 07:00:08.060054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.060070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196bfac0 len:0x10000 key:0x182500 00:10:53.607 [2024-07-24 07:00:08.060082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.060096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196afa00 len:0x10000 key:0x182500 00:10:53.607 [2024-07-24 07:00:08.060107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.060121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001969f940 len:0x10000 key:0x182500 00:10:53.607 [2024-07-24 07:00:08.060133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.060146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001968f880 len:0x10000 key:0x182500 00:10:53.607 [2024-07-24 07:00:08.060158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.607 [2024-07-24 07:00:08.060171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:41728 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x20001967f7c0 len:0x10000 key:0x182500 00:10:53.607 [2024-07-24 07:00:08.060183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.608 [2024-07-24 07:00:08.060197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001966f700 len:0x10000 key:0x182500 00:10:53.608 [2024-07-24 07:00:08.060208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.608 [2024-07-24 07:00:08.060222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001965f640 len:0x10000 key:0x182500 00:10:53.608 [2024-07-24 07:00:08.060234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.608 [2024-07-24 07:00:08.060248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001964f580 len:0x10000 key:0x182500 00:10:53.608 [2024-07-24 07:00:08.060259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.608 [2024-07-24 07:00:08.060273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001963f4c0 len:0x10000 key:0x182500 00:10:53.608 [2024-07-24 07:00:08.060284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.608 [2024-07-24 07:00:08.060299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001962f400 len:0x10000 key:0x182500 00:10:53.608 [2024-07-24 07:00:08.060312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.608 [2024-07-24 07:00:08.060326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001961f340 len:0x10000 key:0x182500 00:10:53.608 [2024-07-24 07:00:08.060339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.608 [2024-07-24 07:00:08.060353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001960f280 len:0x10000 key:0x182500 00:10:53.608 [2024-07-24 07:00:08.060365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.608 [2024-07-24 07:00:08.060379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199d2e00 len:0x10000 key:0x182500 00:10:53.608 [2024-07-24 07:00:08.060391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.608 [2024-07-24 07:00:08.060405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199c2d40 len:0x10000 key:0x182500 00:10:53.608 [2024-07-24 07:00:08.060417] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.608 [2024-07-24 07:00:08.060431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199b2c80 len:0x10000 key:0x182500 00:10:53.608 [2024-07-24 07:00:08.060443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.608 [2024-07-24 07:00:08.062777] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019902180 was disconnected and freed. reset controller. 00:10:53.608 07:00:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1510093 00:10:53.608 07:00:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:53.608 07:00:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:53.608 07:00:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:53.608 07:00:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:10:53.608 07:00:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:10:53.608 07:00:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:53.608 07:00:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:53.608 { 00:10:53.608 "params": { 00:10:53.608 "name": "Nvme$subsystem", 00:10:53.608 "trtype": "$TEST_TRANSPORT", 00:10:53.608 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:53.608 "adrfam": "ipv4", 00:10:53.608 "trsvcid": "$NVMF_PORT", 00:10:53.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:53.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:53.608 "hdgst": ${hdgst:-false}, 00:10:53.608 "ddgst": ${ddgst:-false} 00:10:53.608 }, 00:10:53.608 "method": "bdev_nvme_attach_controller" 00:10:53.608 } 00:10:53.608 EOF 00:10:53.608 )") 00:10:53.608 07:00:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:10:53.608 07:00:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:10:53.608 07:00:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:10:53.608 07:00:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:53.608 "params": { 00:10:53.608 "name": "Nvme0", 00:10:53.608 "trtype": "rdma", 00:10:53.608 "traddr": "192.168.100.8", 00:10:53.608 "adrfam": "ipv4", 00:10:53.608 "trsvcid": "4420", 00:10:53.608 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:53.608 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:53.608 "hdgst": false, 00:10:53.608 "ddgst": false 00:10:53.608 }, 00:10:53.608 "method": "bdev_nvme_attach_controller" 00:10:53.608 }' 00:10:53.608 [2024-07-24 07:00:08.148863] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:10:53.608 [2024-07-24 07:00:08.148962] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1510509 ] 00:10:53.608 EAL: No free 2048 kB hugepages reported on node 1 00:10:53.867 [2024-07-24 07:00:08.300219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.126 [2024-07-24 07:00:08.526151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.384 Running I/O for 1 seconds... 00:10:55.398 00:10:55.398 Latency(us) 00:10:55.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:55.398 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:55.398 Verification LBA range: start 0x0 length 0x400 00:10:55.398 Nvme0n1 : 1.01 2792.35 174.52 0.00 0.00 22445.70 1271.40 37748.74 00:10:55.398 =================================================================================================================== 00:10:55.398 Total : 2792.35 174.52 0.00 0.00 22445.70 1271.40 37748.74 00:10:56.779 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 1510093 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:10:56.779 07:00:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:10:56.779 07:00:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:56.779 07:00:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:10:56.779 07:00:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:56.779 07:00:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:10:56.779 07:00:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:56.779 07:00:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:10:56.779 07:00:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:56.779 07:00:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:56.779 07:00:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:10:56.779 07:00:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:56.779 07:00:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:56.779 rmmod nvme_rdma 00:10:56.779 rmmod nvme_fabrics 00:10:56.779 07:00:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:56.779 07:00:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:10:56.779 07:00:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:10:56.779 07:00:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1509784 ']' 00:10:56.779 07:00:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@490 -- # killprocess 1509784 00:10:56.779 07:00:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1509784 ']' 00:10:56.779 07:00:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1509784 00:10:56.779 07:00:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:10:56.779 07:00:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:56.779 07:00:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1509784 00:10:56.779 07:00:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:56.779 07:00:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:56.779 07:00:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1509784' 00:10:56.779 killing process with pid 1509784 00:10:56.779 07:00:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1509784 00:10:56.779 07:00:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1509784 00:10:58.683 [2024-07-24 07:00:13.173864] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:58.683 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:58.683 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:58.683 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:58.683 00:10:58.683 real 0m17.468s 00:10:58.683 user 0m37.669s 00:10:58.683 sys 0m8.193s 00:10:58.683 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:58.683 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:58.683 ************************************ 00:10:58.683 END TEST nvmf_host_management 00:10:58.683 ************************************ 00:10:58.683 07:00:13 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:10:58.683 07:00:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:58.683 07:00:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:58.683 07:00:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:58.942 ************************************ 00:10:58.942 START TEST nvmf_lvol 00:10:58.942 ************************************ 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:10:58.942 * Looking for test storage... 
00:10:58.942 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:10:58.942 07:00:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:07.059 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:07.059 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:11:07.059 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:07.059 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:07.059 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:07.059 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:07.059 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:07.059 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:11:07.059 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:07.059 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:11:07.059 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:11:07.059 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:11:07.059 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:11:07.059 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:11:07.059 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:11:07.059 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:07.059 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:07.059 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:07.059 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:11:07.059 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:07.059 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:07.059 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:07.059 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:07.059 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:07.059 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:07.060 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:07.060 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:07.060 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:07.060 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # rdma_device_init 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # uname 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:07.060 07:00:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:07.060 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:07.060 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:07.060 altname 
enp217s0f0np0 00:11:07.060 altname ens818f0np0 00:11:07.060 inet 192.168.100.8/24 scope global mlx_0_0 00:11:07.060 valid_lft forever preferred_lft forever 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:07.060 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:07.060 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:07.060 altname enp217s0f1np1 00:11:07.060 altname ens818f1np1 00:11:07.060 inet 192.168.100.9/24 scope global mlx_0_1 00:11:07.060 valid_lft forever preferred_lft forever 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:07.060 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- 
# [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:07.061 192.168.100.9' 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:07.061 192.168.100.9' 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # head -n 1 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:07.061 192.168.100.9' 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # tail -n +2 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # head -n 1 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:07.061 07:00:21 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1515726 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1515726 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1515726 ']' 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:07.061 07:00:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:07.061 [2024-07-24 07:00:21.265350] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:11:07.061 [2024-07-24 07:00:21.265441] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.061 EAL: No free 2048 kB hugepages reported on node 1 00:11:07.061 [2024-07-24 07:00:21.410777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:07.061 [2024-07-24 07:00:21.622911] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.061 [2024-07-24 07:00:21.622956] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.061 [2024-07-24 07:00:21.622973] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.061 [2024-07-24 07:00:21.622984] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:07.061 [2024-07-24 07:00:21.622996] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
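Editor's note on the app_setup_trace notice just above: the -e 0xFFFF argument enables every tracepoint group, and the notice names the exact command for taking a snapshot while the target runs. A minimal way to act on it, taken from the notice itself (only the output redirection and destination path are assumptions for illustration):

    # Capture a snapshot of the nvmf tracepoints for app instance 0, as the notice suggests
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace_snapshot.txt
    # Or keep the shared-memory trace file around for offline analysis/debug after the app exits
    cp /dev/shm/nvmf_trace.0 /tmp/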
00:11:07.061 [2024-07-24 07:00:21.623072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.061 [2024-07-24 07:00:21.623137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.061 [2024-07-24 07:00:21.623146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.633 07:00:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:07.633 07:00:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:11:07.633 07:00:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:07.633 07:00:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:07.633 07:00:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:07.633 07:00:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.633 07:00:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:07.892 [2024-07-24 07:00:22.271029] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7f269ed26940) succeed. 00:11:07.892 [2024-07-24 07:00:22.281106] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7f269ece2940) succeed. 00:11:08.150 07:00:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:08.407 07:00:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:08.407 07:00:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:08.665 07:00:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:08.665 07:00:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:08.665 07:00:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:08.931 07:00:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=99508779-db74-4d9c-b797-53c5f890fcd1 00:11:08.931 07:00:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 99508779-db74-4d9c-b797-53c5f890fcd1 lvol 20 00:11:09.190 07:00:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=918aee2a-6e3f-4980-b9f1-110346d97201 00:11:09.190 07:00:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:09.190 07:00:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 918aee2a-6e3f-4980-b9f1-110346d97201 00:11:09.448 07:00:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:11:09.706 [2024-07-24 07:00:24.127242] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:09.706 07:00:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:09.965 07:00:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1516328 00:11:09.965 07:00:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:09.965 07:00:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:09.965 EAL: No free 2048 kB hugepages reported on node 1 00:11:10.901 07:00:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 918aee2a-6e3f-4980-b9f1-110346d97201 MY_SNAPSHOT 00:11:11.160 07:00:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9f7eeb83-ac50-4e21-a76d-55be9daf6017 00:11:11.160 07:00:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 918aee2a-6e3f-4980-b9f1-110346d97201 30 00:11:11.160 07:00:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 9f7eeb83-ac50-4e21-a76d-55be9daf6017 MY_CLONE 00:11:11.419 07:00:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=220c803b-ca94-44cb-a5df-1172da94efb7 00:11:11.419 07:00:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 220c803b-ca94-44cb-a5df-1172da94efb7 00:11:11.677 07:00:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1516328 00:11:21.656 Initializing NVMe Controllers 00:11:21.656 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:11:21.656 Controller IO queue size 128, less than required. 00:11:21.656 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:21.656 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:21.656 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:21.656 Initialization complete. Launching workers. 
00:11:21.656 ======================================================== 00:11:21.656 Latency(us) 00:11:21.656 Device Information : IOPS MiB/s Average min max 00:11:21.656 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15443.00 60.32 8290.67 3369.54 176718.41 00:11:21.656 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15426.50 60.26 8298.87 111.76 141515.56 00:11:21.656 ======================================================== 00:11:21.656 Total : 30869.49 120.58 8294.76 111.76 176718.41 00:11:21.656 00:11:21.656 07:00:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:21.656 07:00:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 918aee2a-6e3f-4980-b9f1-110346d97201 00:11:21.656 07:00:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 99508779-db74-4d9c-b797-53c5f890fcd1 00:11:21.915 07:00:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:21.915 07:00:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:21.915 07:00:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:21.915 07:00:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:21.915 07:00:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:11:21.915 07:00:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:21.915 07:00:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:21.915 07:00:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:11:21.915 07:00:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:21.915 07:00:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:21.915 rmmod nvme_rdma 00:11:21.915 rmmod nvme_fabrics 00:11:21.915 07:00:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:21.915 07:00:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:11:21.915 07:00:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:11:21.915 07:00:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1515726 ']' 00:11:21.915 07:00:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1515726 00:11:21.915 07:00:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1515726 ']' 00:11:21.915 07:00:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1515726 00:11:21.915 07:00:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:11:21.915 07:00:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:21.915 07:00:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1515726 00:11:21.915 07:00:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:21.915 07:00:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:21.915 07:00:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1515726' 00:11:21.915 killing process with pid 1515726 00:11:21.915 07:00:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1515726 00:11:21.915 07:00:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1515726 00:11:24.473 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:24.473 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:24.473 00:11:24.473 real 0m25.238s 00:11:24.473 user 1m16.067s 00:11:24.473 sys 0m7.333s 00:11:24.473 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:24.473 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:24.473 ************************************ 00:11:24.473 END TEST nvmf_lvol 00:11:24.473 ************************************ 00:11:24.473 07:00:38 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:11:24.473 07:00:38 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:24.473 07:00:38 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:24.473 07:00:38 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:24.473 ************************************ 00:11:24.473 START TEST nvmf_lvs_grow 00:11:24.473 ************************************ 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:11:24.474 * Looking for test storage... 
00:11:24.474 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.474 07:00:38 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 
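Editor's note before the nvmftestinit trace that follows: it repeats the mlx5 PCI discovery already seen in the nvmf_lvol run above (Found 0000:d9:00.0 and 0000:d9:00.1, 0x15b3 - 0x1015). The helper builds its device lists from a pre-populated pci_bus_cache; a rough standalone approximation for the Mellanox case only, using lspci instead of that cache (illustrative sketch, not the test's own code):

    # Enumerate Mellanox (vendor 0x15b3) PCI functions and print them in the log's style
    for bdf in $(lspci -Dn -d 15b3: | awk '{print $1}'); do
        dev_id=$(lspci -Dn -s "$bdf" | awk '{print $3}' | cut -d: -f2)
        echo "Found $bdf (0x15b3 - 0x$dev_id)"
        # Net device the kernel bound to this function (e.g. mlx_0_0 after renaming),
        # read from the same sysfs path the trace above uses
        ls "/sys/bus/pci/devices/$bdf/net/" 2>/dev/null
    done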
00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:11:24.474 07:00:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:32.598 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:32.598 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:11:32.598 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:32.598 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:32.598 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:32.598 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:32.598 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:32.598 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:11:32.598 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:32.598 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:11:32.598 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:11:32.598 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:11:32.598 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:11:32.598 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:11:32.598 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:11:32.598 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:32.598 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:32.598 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:32.598 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:32.598 07:00:46 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:32.598 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:32.599 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:32.599 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:32.599 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:32.599 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # rdma_device_init 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # uname 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:32.599 07:00:46 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:32.599 
07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:32.599 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:32.599 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:32.599 altname enp217s0f0np0 00:11:32.599 altname ens818f0np0 00:11:32.599 inet 192.168.100.8/24 scope global mlx_0_0 00:11:32.599 valid_lft forever preferred_lft forever 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:32.599 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:32.599 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:32.599 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:32.599 altname enp217s0f1np1 00:11:32.599 altname ens818f1np1 00:11:32.599 inet 192.168.100.9/24 scope global mlx_0_1 00:11:32.599 valid_lft forever preferred_lft forever 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:32.600 07:00:46 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:32.600 192.168.100.9' 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:32.600 192.168.100.9' 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # head -n 1 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:32.600 192.168.100.9' 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # tail -n +2 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # head -n 1 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@463 -- # 
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1522659 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1522659 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1522659 ']' 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:32.600 07:00:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:32.600 [2024-07-24 07:00:46.898979] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:11:32.600 [2024-07-24 07:00:46.899071] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.600 EAL: No free 2048 kB hugepages reported on node 1 00:11:32.600 [2024-07-24 07:00:47.048549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.859 [2024-07-24 07:00:47.254098] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:32.859 [2024-07-24 07:00:47.254145] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:32.859 [2024-07-24 07:00:47.254159] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.859 [2024-07-24 07:00:47.254191] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.859 [2024-07-24 07:00:47.254203] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
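
The xtrace above shows nvmf/common.sh resolving an IPv4 address for each RDMA interface (mlx_0_0 -> 192.168.100.8, mlx_0_1 -> 192.168.100.9) before assembling RDMA_IP_LIST and deriving NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP. A minimal sketch of that per-interface lookup, reconstructed from the commands visible in the trace (the helper name matches the one traced at nvmf/common.sh@112-113; the direct assignments at the end are illustrative, the script actually takes them from RDMA_IP_LIST):

    # per-interface IPv4 lookup as traced at nvmf/common.sh@112-113
    get_ip_address() {
        local interface=$1
        # "ip -o -4 addr show" prints one line per address; field 4 is ADDR/PREFIX
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)     # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)    # 192.168.100.9 in this run
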
00:11:32.859 [2024-07-24 07:00:47.254239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.118 07:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:33.118 07:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:11:33.118 07:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:33.118 07:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:33.118 07:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:33.118 07:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.118 07:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:33.376 [2024-07-24 07:00:47.877801] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028840/0x7f9e2815e940) succeed. 00:11:33.376 [2024-07-24 07:00:47.886766] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000289c0/0x7f9e28119940) succeed. 00:11:33.376 07:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:33.376 07:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:33.376 07:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:33.376 07:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:33.635 ************************************ 00:11:33.635 START TEST lvs_grow_clean 00:11:33.635 ************************************ 00:11:33.635 07:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:11:33.635 07:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:33.635 07:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:33.635 07:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:33.635 07:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:33.635 07:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:33.635 07:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:33.635 07:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:33.635 07:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:33.635 07:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:33.635 07:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:33.635 07:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:33.894 07:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=02896442-523a-4540-978e-73a0ab3e8c1e 00:11:33.894 07:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 02896442-523a-4540-978e-73a0ab3e8c1e 00:11:33.894 07:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:34.152 07:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:34.152 07:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:34.152 07:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 02896442-523a-4540-978e-73a0ab3e8c1e lvol 150 00:11:34.153 07:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=080fcbe2-6a87-40b6-b2b7-3996ca062b37 00:11:34.153 07:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:34.153 07:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:34.412 [2024-07-24 07:00:48.903788] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:34.412 [2024-07-24 07:00:48.903873] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:34.412 true 00:11:34.412 07:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 02896442-523a-4540-978e-73a0ab3e8c1e 00:11:34.412 07:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:34.671 07:00:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:34.671 07:00:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:34.671 07:00:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 080fcbe2-6a87-40b6-b2b7-3996ca062b37 00:11:34.930 07:00:49 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:11:35.189 [2024-07-24 07:00:49.602202] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:35.189 07:00:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:35.189 07:00:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:35.189 07:00:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1523230 00:11:35.189 07:00:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:35.189 07:00:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1523230 /var/tmp/bdevperf.sock 00:11:35.189 07:00:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1523230 ']' 00:11:35.189 07:00:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:35.189 07:00:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:35.189 07:00:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:35.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:35.189 07:00:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:35.189 07:00:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:35.448 [2024-07-24 07:00:49.843466] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
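
Up to this point the target side of lvs_grow_clean has been assembled entirely through rpc.py calls that appear verbatim in the trace: transport, subsystem, namespace and listeners. Condensed (rpc.py path shortened, UUID as created above), the sequence is:

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 080fcbe2-6a87-40b6-b2b7-3996ca062b37
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

    # the bdevperf initiator started below then attaches to that subsystem over RDMA
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
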
00:11:35.448 [2024-07-24 07:00:49.843579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1523230 ] 00:11:35.448 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.448 [2024-07-24 07:00:49.990920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.707 [2024-07-24 07:00:50.209229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.276 07:00:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:36.276 07:00:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:11:36.276 07:00:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:36.276 Nvme0n1 00:11:36.276 07:00:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:36.535 [ 00:11:36.535 { 00:11:36.535 "name": "Nvme0n1", 00:11:36.535 "aliases": [ 00:11:36.535 "080fcbe2-6a87-40b6-b2b7-3996ca062b37" 00:11:36.535 ], 00:11:36.535 "product_name": "NVMe disk", 00:11:36.535 "block_size": 4096, 00:11:36.535 "num_blocks": 38912, 00:11:36.535 "uuid": "080fcbe2-6a87-40b6-b2b7-3996ca062b37", 00:11:36.535 "assigned_rate_limits": { 00:11:36.535 "rw_ios_per_sec": 0, 00:11:36.535 "rw_mbytes_per_sec": 0, 00:11:36.535 "r_mbytes_per_sec": 0, 00:11:36.535 "w_mbytes_per_sec": 0 00:11:36.535 }, 00:11:36.535 "claimed": false, 00:11:36.535 "zoned": false, 00:11:36.535 "supported_io_types": { 00:11:36.535 "read": true, 00:11:36.535 "write": true, 00:11:36.535 "unmap": true, 00:11:36.535 "flush": true, 00:11:36.535 "reset": true, 00:11:36.535 "nvme_admin": true, 00:11:36.535 "nvme_io": true, 00:11:36.535 "nvme_io_md": false, 00:11:36.535 "write_zeroes": true, 00:11:36.535 "zcopy": false, 00:11:36.535 "get_zone_info": false, 00:11:36.535 "zone_management": false, 00:11:36.535 "zone_append": false, 00:11:36.536 "compare": true, 00:11:36.536 "compare_and_write": true, 00:11:36.536 "abort": true, 00:11:36.536 "seek_hole": false, 00:11:36.536 "seek_data": false, 00:11:36.536 "copy": true, 00:11:36.536 "nvme_iov_md": false 00:11:36.536 }, 00:11:36.536 "memory_domains": [ 00:11:36.536 { 00:11:36.536 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:11:36.536 "dma_device_type": 0 00:11:36.536 } 00:11:36.536 ], 00:11:36.536 "driver_specific": { 00:11:36.536 "nvme": [ 00:11:36.536 { 00:11:36.536 "trid": { 00:11:36.536 "trtype": "RDMA", 00:11:36.536 "adrfam": "IPv4", 00:11:36.536 "traddr": "192.168.100.8", 00:11:36.536 "trsvcid": "4420", 00:11:36.536 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:36.536 }, 00:11:36.536 "ctrlr_data": { 00:11:36.536 "cntlid": 1, 00:11:36.536 "vendor_id": "0x8086", 00:11:36.536 "model_number": "SPDK bdev Controller", 00:11:36.536 "serial_number": "SPDK0", 00:11:36.536 "firmware_revision": "24.09", 00:11:36.536 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:36.536 "oacs": { 00:11:36.536 "security": 0, 00:11:36.536 "format": 0, 00:11:36.536 "firmware": 0, 00:11:36.536 "ns_manage": 0 00:11:36.536 }, 
00:11:36.536 "multi_ctrlr": true, 00:11:36.536 "ana_reporting": false 00:11:36.536 }, 00:11:36.536 "vs": { 00:11:36.536 "nvme_version": "1.3" 00:11:36.536 }, 00:11:36.536 "ns_data": { 00:11:36.536 "id": 1, 00:11:36.536 "can_share": true 00:11:36.536 } 00:11:36.536 } 00:11:36.536 ], 00:11:36.536 "mp_policy": "active_passive" 00:11:36.536 } 00:11:36.536 } 00:11:36.536 ] 00:11:36.536 07:00:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1523504 00:11:36.536 07:00:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:36.536 07:00:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:36.536 Running I/O for 10 seconds... 00:11:37.915 Latency(us) 00:11:37.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:37.915 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:37.915 Nvme0n1 : 1.00 30980.00 121.02 0.00 0.00 0.00 0.00 0.00 00:11:37.915 =================================================================================================================== 00:11:37.915 Total : 30980.00 121.02 0.00 0.00 0.00 0.00 0.00 00:11:37.915 00:11:38.483 07:00:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 02896442-523a-4540-978e-73a0ab3e8c1e 00:11:38.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:38.741 Nvme0n1 : 2.00 31185.50 121.82 0.00 0.00 0.00 0.00 0.00 00:11:38.741 =================================================================================================================== 00:11:38.741 Total : 31185.50 121.82 0.00 0.00 0.00 0.00 0.00 00:11:38.741 00:11:38.741 true 00:11:38.741 07:00:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 02896442-523a-4540-978e-73a0ab3e8c1e 00:11:38.741 07:00:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:38.999 07:00:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:38.999 07:00:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:38.999 07:00:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1523504 00:11:39.568 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:39.568 Nvme0n1 : 3.00 31222.67 121.96 0.00 0.00 0.00 0.00 0.00 00:11:39.568 =================================================================================================================== 00:11:39.568 Total : 31222.67 121.96 0.00 0.00 0.00 0.00 0.00 00:11:39.568 00:11:40.948 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:40.948 Nvme0n1 : 4.00 31353.25 122.47 0.00 0.00 0.00 0.00 0.00 00:11:40.948 =================================================================================================================== 00:11:40.948 Total : 31353.25 122.47 0.00 0.00 0.00 0.00 0.00 00:11:40.948 00:11:41.885 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:11:41.885 Nvme0n1 : 5.00 31417.20 122.72 0.00 0.00 0.00 0.00 0.00 00:11:41.885 =================================================================================================================== 00:11:41.885 Total : 31417.20 122.72 0.00 0.00 0.00 0.00 0.00 00:11:41.885 00:11:42.823 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:42.823 Nvme0n1 : 6.00 31472.83 122.94 0.00 0.00 0.00 0.00 0.00 00:11:42.823 =================================================================================================================== 00:11:42.823 Total : 31472.83 122.94 0.00 0.00 0.00 0.00 0.00 00:11:42.823 00:11:43.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:43.806 Nvme0n1 : 7.00 31515.00 123.11 0.00 0.00 0.00 0.00 0.00 00:11:43.806 =================================================================================================================== 00:11:43.806 Total : 31515.00 123.11 0.00 0.00 0.00 0.00 0.00 00:11:43.806 00:11:44.743 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:44.743 Nvme0n1 : 8.00 31528.50 123.16 0.00 0.00 0.00 0.00 0.00 00:11:44.743 =================================================================================================================== 00:11:44.743 Total : 31528.50 123.16 0.00 0.00 0.00 0.00 0.00 00:11:44.743 00:11:45.681 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:45.681 Nvme0n1 : 9.00 31537.67 123.19 0.00 0.00 0.00 0.00 0.00 00:11:45.681 =================================================================================================================== 00:11:45.681 Total : 31537.67 123.19 0.00 0.00 0.00 0.00 0.00 00:11:45.681 00:11:46.615 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:46.615 Nvme0n1 : 10.00 31558.00 123.27 0.00 0.00 0.00 0.00 0.00 00:11:46.615 =================================================================================================================== 00:11:46.615 Total : 31558.00 123.27 0.00 0.00 0.00 0.00 0.00 00:11:46.615 00:11:46.615 00:11:46.615 Latency(us) 00:11:46.615 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:46.615 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:46.615 Nvme0n1 : 10.00 31558.58 123.28 0.00 0.00 4052.75 2569.01 9804.19 00:11:46.615 =================================================================================================================== 00:11:46.615 Total : 31558.58 123.28 0.00 0.00 4052.75 2569.01 9804.19 00:11:46.615 0 00:11:46.615 07:01:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1523230 00:11:46.615 07:01:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1523230 ']' 00:11:46.615 07:01:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1523230 00:11:46.615 07:01:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:11:46.615 07:01:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:46.615 07:01:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1523230 00:11:46.874 07:01:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 
-- # process_name=reactor_1 00:11:46.874 07:01:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:46.874 07:01:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1523230' 00:11:46.874 killing process with pid 1523230 00:11:46.874 07:01:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1523230 00:11:46.874 Received shutdown signal, test time was about 10.000000 seconds 00:11:46.874 00:11:46.874 Latency(us) 00:11:46.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:46.874 =================================================================================================================== 00:11:46.874 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:46.874 07:01:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1523230 00:11:47.812 07:01:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:48.071 07:01:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:48.071 07:01:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 02896442-523a-4540-978e-73a0ab3e8c1e 00:11:48.071 07:01:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:48.330 07:01:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:48.330 07:01:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:48.330 07:01:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:48.330 [2024-07-24 07:01:02.949829] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:48.589 07:01:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 02896442-523a-4540-978e-73a0ab3e8c1e 00:11:48.589 07:01:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:11:48.589 07:01:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 02896442-523a-4540-978e-73a0ab3e8c1e 00:11:48.589 07:01:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:48.589 07:01:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:48.589 07:01:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:48.590 07:01:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:48.590 07:01:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:48.590 07:01:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:48.590 07:01:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:48.590 07:01:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:11:48.590 07:01:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 02896442-523a-4540-978e-73a0ab3e8c1e 00:11:48.590 request: 00:11:48.590 { 00:11:48.590 "uuid": "02896442-523a-4540-978e-73a0ab3e8c1e", 00:11:48.590 "method": "bdev_lvol_get_lvstores", 00:11:48.590 "req_id": 1 00:11:48.590 } 00:11:48.590 Got JSON-RPC error response 00:11:48.590 response: 00:11:48.590 { 00:11:48.590 "code": -19, 00:11:48.590 "message": "No such device" 00:11:48.590 } 00:11:48.590 07:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:11:48.590 07:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:48.590 07:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:48.590 07:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:48.590 07:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:48.852 aio_bdev 00:11:48.852 07:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 080fcbe2-6a87-40b6-b2b7-3996ca062b37 00:11:48.852 07:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=080fcbe2-6a87-40b6-b2b7-3996ca062b37 00:11:48.852 07:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:48.852 07:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:11:48.852 07:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:48.852 07:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:48.852 07:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:49.112 07:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 
080fcbe2-6a87-40b6-b2b7-3996ca062b37 -t 2000 00:11:49.112 [ 00:11:49.112 { 00:11:49.112 "name": "080fcbe2-6a87-40b6-b2b7-3996ca062b37", 00:11:49.112 "aliases": [ 00:11:49.112 "lvs/lvol" 00:11:49.112 ], 00:11:49.112 "product_name": "Logical Volume", 00:11:49.112 "block_size": 4096, 00:11:49.112 "num_blocks": 38912, 00:11:49.112 "uuid": "080fcbe2-6a87-40b6-b2b7-3996ca062b37", 00:11:49.112 "assigned_rate_limits": { 00:11:49.112 "rw_ios_per_sec": 0, 00:11:49.112 "rw_mbytes_per_sec": 0, 00:11:49.112 "r_mbytes_per_sec": 0, 00:11:49.112 "w_mbytes_per_sec": 0 00:11:49.112 }, 00:11:49.112 "claimed": false, 00:11:49.112 "zoned": false, 00:11:49.112 "supported_io_types": { 00:11:49.112 "read": true, 00:11:49.112 "write": true, 00:11:49.112 "unmap": true, 00:11:49.112 "flush": false, 00:11:49.112 "reset": true, 00:11:49.112 "nvme_admin": false, 00:11:49.112 "nvme_io": false, 00:11:49.112 "nvme_io_md": false, 00:11:49.112 "write_zeroes": true, 00:11:49.112 "zcopy": false, 00:11:49.112 "get_zone_info": false, 00:11:49.112 "zone_management": false, 00:11:49.112 "zone_append": false, 00:11:49.112 "compare": false, 00:11:49.112 "compare_and_write": false, 00:11:49.112 "abort": false, 00:11:49.112 "seek_hole": true, 00:11:49.112 "seek_data": true, 00:11:49.112 "copy": false, 00:11:49.112 "nvme_iov_md": false 00:11:49.112 }, 00:11:49.112 "driver_specific": { 00:11:49.112 "lvol": { 00:11:49.112 "lvol_store_uuid": "02896442-523a-4540-978e-73a0ab3e8c1e", 00:11:49.112 "base_bdev": "aio_bdev", 00:11:49.112 "thin_provision": false, 00:11:49.112 "num_allocated_clusters": 38, 00:11:49.112 "snapshot": false, 00:11:49.112 "clone": false, 00:11:49.112 "esnap_clone": false 00:11:49.112 } 00:11:49.112 } 00:11:49.112 } 00:11:49.112 ] 00:11:49.112 07:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:11:49.112 07:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 02896442-523a-4540-978e-73a0ab3e8c1e 00:11:49.112 07:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:49.372 07:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:49.372 07:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 02896442-523a-4540-978e-73a0ab3e8c1e 00:11:49.372 07:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:49.372 07:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:49.372 07:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 080fcbe2-6a87-40b6-b2b7-3996ca062b37 00:11:49.631 07:01:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 02896442-523a-4540-978e-73a0ab3e8c1e 00:11:49.890 07:01:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete 
aio_bdev 00:11:49.890 07:01:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:49.890 00:11:49.890 real 0m16.475s 00:11:49.890 user 0m16.258s 00:11:49.890 sys 0m1.312s 00:11:49.890 07:01:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:49.890 07:01:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:49.890 ************************************ 00:11:49.890 END TEST lvs_grow_clean 00:11:49.890 ************************************ 00:11:50.150 07:01:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:50.150 07:01:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:50.150 07:01:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:50.150 07:01:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:50.150 ************************************ 00:11:50.150 START TEST lvs_grow_dirty 00:11:50.150 ************************************ 00:11:50.150 07:01:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:11:50.150 07:01:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:50.150 07:01:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:50.150 07:01:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:50.150 07:01:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:50.150 07:01:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:50.150 07:01:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:50.150 07:01:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:50.150 07:01:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:50.150 07:01:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:50.409 07:01:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:50.409 07:01:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:50.409 07:01:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5ce7a1e4-75fe-4090-afa8-fa1a57facf8b 00:11:50.409 07:01:04 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ce7a1e4-75fe-4090-afa8-fa1a57facf8b 00:11:50.409 07:01:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:50.668 07:01:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:50.668 07:01:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:50.668 07:01:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5ce7a1e4-75fe-4090-afa8-fa1a57facf8b lvol 150 00:11:50.926 07:01:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f750477a-ac3d-4a52-b519-9fee2b589946 00:11:50.926 07:01:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:50.926 07:01:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:50.926 [2024-07-24 07:01:05.502602] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:50.926 [2024-07-24 07:01:05.502681] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:50.926 true 00:11:50.926 07:01:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ce7a1e4-75fe-4090-afa8-fa1a57facf8b 00:11:50.926 07:01:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:51.188 07:01:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:51.188 07:01:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:51.448 07:01:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f750477a-ac3d-4a52-b519-9fee2b589946 00:11:51.448 07:01:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:11:51.706 [2024-07-24 07:01:06.156798] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:51.706 07:01:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:51.964 07:01:06 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1526029 00:11:51.964 07:01:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:51.964 07:01:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:51.964 07:01:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1526029 /var/tmp/bdevperf.sock 00:11:51.964 07:01:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1526029 ']' 00:11:51.964 07:01:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:51.964 07:01:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:51.964 07:01:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:51.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:51.964 07:01:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:51.964 07:01:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:51.964 [2024-07-24 07:01:06.420937] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
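
lvs_grow_dirty repeats the grow sequence from lvs_grow_clean while bdevperf I/O is in flight, and later kills the target without closing the lvstore. The grow itself is a handful of rpc.py calls that appear in the trace around this point (rpc.py path shortened, lvstore UUID as created above):

    truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
    rpc.py bdev_aio_rescan aio_bdev         # block count goes from 51200 to 102400 (4 KiB blocks)
    rpc.py bdev_lvol_grow_lvstore -u 5ce7a1e4-75fe-4090-afa8-fa1a57facf8b
    rpc.py bdev_lvol_get_lvstores -u 5ce7a1e4-75fe-4090-afa8-fa1a57facf8b \
        | jq -r '.[0].total_data_clusters'  # 49 clusters (4 MiB each) before the grow, 99 after
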
00:11:51.964 [2024-07-24 07:01:06.421052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1526029 ] 00:11:51.964 EAL: No free 2048 kB hugepages reported on node 1 00:11:51.964 [2024-07-24 07:01:06.567775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.223 [2024-07-24 07:01:06.779517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.789 07:01:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:52.789 07:01:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:11:52.789 07:01:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:52.789 Nvme0n1 00:11:53.048 07:01:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:53.048 [ 00:11:53.048 { 00:11:53.048 "name": "Nvme0n1", 00:11:53.048 "aliases": [ 00:11:53.048 "f750477a-ac3d-4a52-b519-9fee2b589946" 00:11:53.048 ], 00:11:53.048 "product_name": "NVMe disk", 00:11:53.048 "block_size": 4096, 00:11:53.048 "num_blocks": 38912, 00:11:53.048 "uuid": "f750477a-ac3d-4a52-b519-9fee2b589946", 00:11:53.048 "assigned_rate_limits": { 00:11:53.048 "rw_ios_per_sec": 0, 00:11:53.048 "rw_mbytes_per_sec": 0, 00:11:53.048 "r_mbytes_per_sec": 0, 00:11:53.048 "w_mbytes_per_sec": 0 00:11:53.048 }, 00:11:53.048 "claimed": false, 00:11:53.048 "zoned": false, 00:11:53.048 "supported_io_types": { 00:11:53.048 "read": true, 00:11:53.048 "write": true, 00:11:53.048 "unmap": true, 00:11:53.048 "flush": true, 00:11:53.048 "reset": true, 00:11:53.048 "nvme_admin": true, 00:11:53.048 "nvme_io": true, 00:11:53.048 "nvme_io_md": false, 00:11:53.048 "write_zeroes": true, 00:11:53.048 "zcopy": false, 00:11:53.048 "get_zone_info": false, 00:11:53.048 "zone_management": false, 00:11:53.048 "zone_append": false, 00:11:53.048 "compare": true, 00:11:53.048 "compare_and_write": true, 00:11:53.048 "abort": true, 00:11:53.048 "seek_hole": false, 00:11:53.048 "seek_data": false, 00:11:53.048 "copy": true, 00:11:53.048 "nvme_iov_md": false 00:11:53.048 }, 00:11:53.048 "memory_domains": [ 00:11:53.048 { 00:11:53.048 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:11:53.048 "dma_device_type": 0 00:11:53.048 } 00:11:53.048 ], 00:11:53.048 "driver_specific": { 00:11:53.048 "nvme": [ 00:11:53.048 { 00:11:53.048 "trid": { 00:11:53.048 "trtype": "RDMA", 00:11:53.048 "adrfam": "IPv4", 00:11:53.048 "traddr": "192.168.100.8", 00:11:53.048 "trsvcid": "4420", 00:11:53.048 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:53.048 }, 00:11:53.048 "ctrlr_data": { 00:11:53.048 "cntlid": 1, 00:11:53.048 "vendor_id": "0x8086", 00:11:53.048 "model_number": "SPDK bdev Controller", 00:11:53.048 "serial_number": "SPDK0", 00:11:53.048 "firmware_revision": "24.09", 00:11:53.048 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:53.048 "oacs": { 00:11:53.048 "security": 0, 00:11:53.048 "format": 0, 00:11:53.048 "firmware": 0, 00:11:53.048 "ns_manage": 0 00:11:53.048 }, 
00:11:53.048 "multi_ctrlr": true, 00:11:53.048 "ana_reporting": false 00:11:53.048 }, 00:11:53.048 "vs": { 00:11:53.048 "nvme_version": "1.3" 00:11:53.048 }, 00:11:53.048 "ns_data": { 00:11:53.048 "id": 1, 00:11:53.048 "can_share": true 00:11:53.048 } 00:11:53.048 } 00:11:53.048 ], 00:11:53.048 "mp_policy": "active_passive" 00:11:53.048 } 00:11:53.048 } 00:11:53.048 ] 00:11:53.048 07:01:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1526244 00:11:53.048 07:01:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:53.048 07:01:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:53.307 Running I/O for 10 seconds... 00:11:54.243 Latency(us) 00:11:54.243 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:54.243 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:54.243 Nvme0n1 : 1.00 30751.00 120.12 0.00 0.00 0.00 0.00 0.00 00:11:54.243 =================================================================================================================== 00:11:54.243 Total : 30751.00 120.12 0.00 0.00 0.00 0.00 0.00 00:11:54.243 00:11:55.180 07:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5ce7a1e4-75fe-4090-afa8-fa1a57facf8b 00:11:55.180 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:55.180 Nvme0n1 : 2.00 31039.50 121.25 0.00 0.00 0.00 0.00 0.00 00:11:55.180 =================================================================================================================== 00:11:55.180 Total : 31039.50 121.25 0.00 0.00 0.00 0.00 0.00 00:11:55.180 00:11:55.180 true 00:11:55.180 07:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ce7a1e4-75fe-4090-afa8-fa1a57facf8b 00:11:55.180 07:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:55.438 07:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:55.438 07:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:55.438 07:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1526244 00:11:56.374 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:56.374 Nvme0n1 : 3.00 31019.33 121.17 0.00 0.00 0.00 0.00 0.00 00:11:56.374 =================================================================================================================== 00:11:56.374 Total : 31019.33 121.17 0.00 0.00 0.00 0.00 0.00 00:11:56.374 00:11:57.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:57.310 Nvme0n1 : 4.00 31144.75 121.66 0.00 0.00 0.00 0.00 0.00 00:11:57.310 =================================================================================================================== 00:11:57.310 Total : 31144.75 121.66 0.00 0.00 0.00 0.00 0.00 00:11:57.310 00:11:58.245 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:11:58.245 Nvme0n1 : 5.00 31251.20 122.08 0.00 0.00 0.00 0.00 0.00 00:11:58.246 =================================================================================================================== 00:11:58.246 Total : 31251.20 122.08 0.00 0.00 0.00 0.00 0.00 00:11:58.246 00:11:59.206 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:59.206 Nvme0n1 : 6.00 31328.17 122.38 0.00 0.00 0.00 0.00 0.00 00:11:59.206 =================================================================================================================== 00:11:59.206 Total : 31328.17 122.38 0.00 0.00 0.00 0.00 0.00 00:11:59.206 00:12:00.149 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:00.149 Nvme0n1 : 7.00 31374.71 122.56 0.00 0.00 0.00 0.00 0.00 00:12:00.149 =================================================================================================================== 00:12:00.149 Total : 31374.71 122.56 0.00 0.00 0.00 0.00 0.00 00:12:00.149 00:12:01.528 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:01.528 Nvme0n1 : 8.00 31414.75 122.71 0.00 0.00 0.00 0.00 0.00 00:12:01.528 =================================================================================================================== 00:12:01.528 Total : 31414.75 122.71 0.00 0.00 0.00 0.00 0.00 00:12:01.528 00:12:02.095 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:02.095 Nvme0n1 : 9.00 31449.89 122.85 0.00 0.00 0.00 0.00 0.00 00:12:02.095 =================================================================================================================== 00:12:02.095 Total : 31449.89 122.85 0.00 0.00 0.00 0.00 0.00 00:12:02.095 00:12:03.473 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:03.473 Nvme0n1 : 10.00 31476.00 122.95 0.00 0.00 0.00 0.00 0.00 00:12:03.473 =================================================================================================================== 00:12:03.473 Total : 31476.00 122.95 0.00 0.00 0.00 0.00 0.00 00:12:03.473 00:12:03.473 00:12:03.473 Latency(us) 00:12:03.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.474 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:03.474 Nvme0n1 : 10.00 31474.76 122.95 0.00 0.00 4063.47 2988.44 15833.50 00:12:03.474 =================================================================================================================== 00:12:03.474 Total : 31474.76 122.95 0.00 0.00 4063.47 2988.44 15833.50 00:12:03.474 0 00:12:03.474 07:01:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1526029 00:12:03.474 07:01:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1526029 ']' 00:12:03.474 07:01:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1526029 00:12:03.474 07:01:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:12:03.474 07:01:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:03.474 07:01:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1526029 00:12:03.474 07:01:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 
-- # process_name=reactor_1 00:12:03.474 07:01:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:03.474 07:01:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1526029' 00:12:03.474 killing process with pid 1526029 00:12:03.474 07:01:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1526029 00:12:03.474 Received shutdown signal, test time was about 10.000000 seconds 00:12:03.474 00:12:03.474 Latency(us) 00:12:03.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.474 =================================================================================================================== 00:12:03.474 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:03.474 07:01:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1526029 00:12:04.410 07:01:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:04.410 07:01:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:04.669 07:01:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ce7a1e4-75fe-4090-afa8-fa1a57facf8b 00:12:04.669 07:01:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:04.928 07:01:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:04.929 07:01:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:04.929 07:01:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1522659 00:12:04.929 07:01:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1522659 00:12:04.929 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1522659 Killed "${NVMF_APP[@]}" "$@" 00:12:04.929 07:01:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:04.929 07:01:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:04.929 07:01:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:04.929 07:01:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:04.929 07:01:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:04.929 07:01:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1528369 00:12:04.929 07:01:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1528369 00:12:04.929 07:01:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:04.929 07:01:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1528369 ']' 00:12:04.929 07:01:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.929 07:01:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:04.929 07:01:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.929 07:01:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:04.929 07:01:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:04.929 [2024-07-24 07:01:19.526529] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:12:04.929 [2024-07-24 07:01:19.526622] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.187 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.187 [2024-07-24 07:01:19.679642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.446 [2024-07-24 07:01:19.880889] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.446 [2024-07-24 07:01:19.880936] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.446 [2024-07-24 07:01:19.880952] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.446 [2024-07-24 07:01:19.880982] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.446 [2024-07-24 07:01:19.880993] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
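For reference, the dirty-lvs step above kills the original target out from under the lvstore and restarts it on a single core (-m 0x1), then blocks until the new process answers on /var/tmp/spdk.sock before issuing any further rpc.py calls. A minimal stand-alone sketch of that start-and-wait pattern, assuming the same binary and socket paths seen in this trace (the one-second poll and 100-try bound are illustrative, not taken from the test helpers):

  # launch nvmf_tgt pinned to core 0 and wait for its JSON-RPC socket
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  tgt_pid=$!
  for _ in $(seq 1 100); do
      [ -S /var/tmp/spdk.sock ] && break
      kill -0 "$tgt_pid" 2>/dev/null || { echo "target exited early" >&2; exit 1; }
      sleep 1
  done
  ./scripts/rpc.py bdev_lvol_get_lvstores   # target is now ready to accept RPCs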
00:12:05.446 [2024-07-24 07:01:19.881030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.705 07:01:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:05.705 07:01:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:12:05.705 07:01:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:05.705 07:01:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:05.705 07:01:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:05.705 07:01:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.705 07:01:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:05.964 [2024-07-24 07:01:20.491230] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:05.964 [2024-07-24 07:01:20.491371] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:05.964 [2024-07-24 07:01:20.491411] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:05.964 07:01:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:05.964 07:01:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f750477a-ac3d-4a52-b519-9fee2b589946 00:12:05.964 07:01:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=f750477a-ac3d-4a52-b519-9fee2b589946 00:12:05.964 07:01:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:05.964 07:01:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:05.964 07:01:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:05.964 07:01:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:05.964 07:01:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:06.223 07:01:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f750477a-ac3d-4a52-b519-9fee2b589946 -t 2000 00:12:06.223 [ 00:12:06.223 { 00:12:06.223 "name": "f750477a-ac3d-4a52-b519-9fee2b589946", 00:12:06.223 "aliases": [ 00:12:06.223 "lvs/lvol" 00:12:06.223 ], 00:12:06.223 "product_name": "Logical Volume", 00:12:06.223 "block_size": 4096, 00:12:06.223 "num_blocks": 38912, 00:12:06.223 "uuid": "f750477a-ac3d-4a52-b519-9fee2b589946", 00:12:06.223 "assigned_rate_limits": { 00:12:06.223 "rw_ios_per_sec": 0, 00:12:06.223 "rw_mbytes_per_sec": 0, 00:12:06.223 "r_mbytes_per_sec": 0, 00:12:06.223 "w_mbytes_per_sec": 0 00:12:06.223 }, 00:12:06.223 "claimed": false, 00:12:06.223 "zoned": false, 
00:12:06.223 "supported_io_types": { 00:12:06.223 "read": true, 00:12:06.223 "write": true, 00:12:06.223 "unmap": true, 00:12:06.223 "flush": false, 00:12:06.223 "reset": true, 00:12:06.223 "nvme_admin": false, 00:12:06.223 "nvme_io": false, 00:12:06.223 "nvme_io_md": false, 00:12:06.223 "write_zeroes": true, 00:12:06.223 "zcopy": false, 00:12:06.223 "get_zone_info": false, 00:12:06.223 "zone_management": false, 00:12:06.223 "zone_append": false, 00:12:06.223 "compare": false, 00:12:06.223 "compare_and_write": false, 00:12:06.223 "abort": false, 00:12:06.223 "seek_hole": true, 00:12:06.223 "seek_data": true, 00:12:06.223 "copy": false, 00:12:06.223 "nvme_iov_md": false 00:12:06.223 }, 00:12:06.223 "driver_specific": { 00:12:06.223 "lvol": { 00:12:06.223 "lvol_store_uuid": "5ce7a1e4-75fe-4090-afa8-fa1a57facf8b", 00:12:06.223 "base_bdev": "aio_bdev", 00:12:06.223 "thin_provision": false, 00:12:06.223 "num_allocated_clusters": 38, 00:12:06.223 "snapshot": false, 00:12:06.223 "clone": false, 00:12:06.223 "esnap_clone": false 00:12:06.223 } 00:12:06.223 } 00:12:06.223 } 00:12:06.223 ] 00:12:06.483 07:01:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:06.483 07:01:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:06.483 07:01:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ce7a1e4-75fe-4090-afa8-fa1a57facf8b 00:12:06.483 07:01:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:06.483 07:01:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:06.483 07:01:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ce7a1e4-75fe-4090-afa8-fa1a57facf8b 00:12:06.742 07:01:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:06.742 07:01:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:06.742 [2024-07-24 07:01:21.367326] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:07.001 07:01:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ce7a1e4-75fe-4090-afa8-fa1a57facf8b 00:12:07.001 07:01:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:12:07.001 07:01:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ce7a1e4-75fe-4090-afa8-fa1a57facf8b 00:12:07.001 07:01:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:07.001 07:01:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" 
in 00:12:07.001 07:01:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:07.001 07:01:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:07.001 07:01:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:07.001 07:01:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:07.001 07:01:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:07.001 07:01:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:12:07.001 07:01:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ce7a1e4-75fe-4090-afa8-fa1a57facf8b 00:12:07.001 request: 00:12:07.001 { 00:12:07.001 "uuid": "5ce7a1e4-75fe-4090-afa8-fa1a57facf8b", 00:12:07.001 "method": "bdev_lvol_get_lvstores", 00:12:07.001 "req_id": 1 00:12:07.001 } 00:12:07.001 Got JSON-RPC error response 00:12:07.001 response: 00:12:07.001 { 00:12:07.001 "code": -19, 00:12:07.002 "message": "No such device" 00:12:07.002 } 00:12:07.002 07:01:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:12:07.002 07:01:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:07.002 07:01:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:07.002 07:01:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:07.002 07:01:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:07.261 aio_bdev 00:12:07.261 07:01:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f750477a-ac3d-4a52-b519-9fee2b589946 00:12:07.261 07:01:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=f750477a-ac3d-4a52-b519-9fee2b589946 00:12:07.261 07:01:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:07.261 07:01:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:07.261 07:01:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:07.261 07:01:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:07.261 07:01:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:07.520 07:01:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f750477a-ac3d-4a52-b519-9fee2b589946 -t 2000 00:12:07.520 [ 00:12:07.520 { 00:12:07.520 "name": "f750477a-ac3d-4a52-b519-9fee2b589946", 00:12:07.520 "aliases": [ 00:12:07.520 "lvs/lvol" 00:12:07.520 ], 00:12:07.520 "product_name": "Logical Volume", 00:12:07.520 "block_size": 4096, 00:12:07.520 "num_blocks": 38912, 00:12:07.520 "uuid": "f750477a-ac3d-4a52-b519-9fee2b589946", 00:12:07.520 "assigned_rate_limits": { 00:12:07.520 "rw_ios_per_sec": 0, 00:12:07.520 "rw_mbytes_per_sec": 0, 00:12:07.520 "r_mbytes_per_sec": 0, 00:12:07.520 "w_mbytes_per_sec": 0 00:12:07.520 }, 00:12:07.520 "claimed": false, 00:12:07.520 "zoned": false, 00:12:07.520 "supported_io_types": { 00:12:07.520 "read": true, 00:12:07.520 "write": true, 00:12:07.520 "unmap": true, 00:12:07.520 "flush": false, 00:12:07.520 "reset": true, 00:12:07.520 "nvme_admin": false, 00:12:07.520 "nvme_io": false, 00:12:07.520 "nvme_io_md": false, 00:12:07.520 "write_zeroes": true, 00:12:07.520 "zcopy": false, 00:12:07.520 "get_zone_info": false, 00:12:07.520 "zone_management": false, 00:12:07.520 "zone_append": false, 00:12:07.520 "compare": false, 00:12:07.520 "compare_and_write": false, 00:12:07.520 "abort": false, 00:12:07.520 "seek_hole": true, 00:12:07.520 "seek_data": true, 00:12:07.520 "copy": false, 00:12:07.520 "nvme_iov_md": false 00:12:07.520 }, 00:12:07.520 "driver_specific": { 00:12:07.520 "lvol": { 00:12:07.520 "lvol_store_uuid": "5ce7a1e4-75fe-4090-afa8-fa1a57facf8b", 00:12:07.520 "base_bdev": "aio_bdev", 00:12:07.520 "thin_provision": false, 00:12:07.520 "num_allocated_clusters": 38, 00:12:07.520 "snapshot": false, 00:12:07.520 "clone": false, 00:12:07.520 "esnap_clone": false 00:12:07.520 } 00:12:07.520 } 00:12:07.520 } 00:12:07.520 ] 00:12:07.520 07:01:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:07.520 07:01:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ce7a1e4-75fe-4090-afa8-fa1a57facf8b 00:12:07.520 07:01:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:07.780 07:01:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:07.780 07:01:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ce7a1e4-75fe-4090-afa8-fa1a57facf8b 00:12:07.780 07:01:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:08.039 07:01:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:08.039 07:01:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f750477a-ac3d-4a52-b519-9fee2b589946 00:12:08.039 07:01:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5ce7a1e4-75fe-4090-afa8-fa1a57facf8b 00:12:08.298 07:01:22 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:08.557 07:01:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:08.557 00:12:08.557 real 0m18.425s 00:12:08.557 user 0m47.576s 00:12:08.557 sys 0m3.641s 00:12:08.557 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:08.557 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:08.557 ************************************ 00:12:08.557 END TEST lvs_grow_dirty 00:12:08.557 ************************************ 00:12:08.557 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:08.557 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:12:08.557 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:12:08.557 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:08.557 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:08.557 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:08.557 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:08.557 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:08.557 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:08.557 nvmf_trace.0 00:12:08.557 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:12:08.557 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:08.557 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:08.557 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:12:08.557 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:08.557 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:08.557 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:12:08.557 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:08.557 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:08.557 rmmod nvme_rdma 00:12:08.557 rmmod nvme_fabrics 00:12:08.557 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:08.557 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:12:08.557 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:12:08.557 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1528369 ']' 00:12:08.557 07:01:23 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1528369 00:12:08.557 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1528369 ']' 00:12:08.557 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1528369 00:12:08.558 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:12:08.558 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:08.558 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1528369 00:12:08.817 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:08.817 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:08.817 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1528369' 00:12:08.817 killing process with pid 1528369 00:12:08.817 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1528369 00:12:08.817 07:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1528369 00:12:10.195 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:10.195 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:10.195 00:12:10.195 real 0m45.794s 00:12:10.195 user 1m11.236s 00:12:10.195 sys 0m11.713s 00:12:10.195 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:10.195 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:10.196 ************************************ 00:12:10.196 END TEST nvmf_lvs_grow 00:12:10.196 ************************************ 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:10.196 ************************************ 00:12:10.196 START TEST nvmf_bdev_io_wait 00:12:10.196 ************************************ 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:12:10.196 * Looking for test storage... 
00:12:10.196 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 
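The nvmf/common.sh block being sourced here pins the addressing defaults used throughout these tests (port 4420, the 192.168.100.x prefix) and generates a host NQN with nvme gen-hostnqn. Purely for illustration, a kernel initiator would attach to a subsystem exported by this target with roughly the nvme-cli call below; the subsystem NQN is the cnode1 value created later in this test, the host NQN is the one generated above, and -i 15 mirrors the queue-count override common.sh applies on RDMA setups further down:

  nvme connect -t rdma -a 192.168.100.8 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
      -i 15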
00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:12:10.196 07:01:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 
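The PCI-scan block in progress here builds per-vendor device-ID tables (Intel E810/X722 plus the Mellanox ConnectX family) and then walks the PCI bus for matching ports. As a quick cross-check outside the harness, the same adapters can be listed directly with lspci against the Mellanox vendor ID 0x15b3 (assuming pciutils is installed); the 15b3:1015 functions it reports correspond to the two mlx5 ports found below:

  lspci -D -nn -d 15b3:   # list all Mellanox functions with PCI domain and numeric IDs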
00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:18.318 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:18.318 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:18.318 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:18.318 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:18.318 07:01:32 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # rdma_device_init 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # uname 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:18.318 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:18.579 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:18.579 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:18.579 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:18.579 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:18.579 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:18.579 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:18.579 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:18.579 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:18.579 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:18.579 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:18.579 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.579 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:18.579 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:18.579 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:12:18.579 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:18.579 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.579 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:18.579 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.579 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:18.579 07:01:32 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:18.579 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:12:18.579 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:18.579 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:18.579 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:18.579 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:18.579 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:18.579 07:01:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:18.579 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:18.579 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:18.579 altname enp217s0f0np0 00:12:18.579 altname ens818f0np0 00:12:18.579 inet 192.168.100.8/24 scope global mlx_0_0 00:12:18.579 valid_lft forever preferred_lft forever 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:18.579 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:18.579 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:18.579 altname enp217s0f1np1 00:12:18.579 altname ens818f1np1 00:12:18.579 inet 192.168.100.9/24 scope global mlx_0_1 00:12:18.579 valid_lft forever preferred_lft forever 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@86 -- # get_rdma_if_list 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:18.579 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:18.580 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:18.580 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:18.580 07:01:33 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:18.580 192.168.100.9' 00:12:18.580 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:18.580 192.168.100.9' 00:12:18.580 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # head -n 1 00:12:18.580 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:18.580 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:18.580 192.168.100.9' 00:12:18.580 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # tail -n +2 00:12:18.580 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # head -n 1 00:12:18.580 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:18.580 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:18.580 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:18.580 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:18.580 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:18.580 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:18.580 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:18.580 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:18.580 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:18.580 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:18.580 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1533236 00:12:18.580 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1533236 00:12:18.580 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1533236 ']' 00:12:18.580 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:18.580 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.580 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:18.580 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.580 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:18.580 07:01:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:18.839 [2024-07-24 07:01:33.239914] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
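Everything from here until the bdevperf jobs start is target-side plumbing driven over rpc.py (the target was launched with --wait-for-rpc, so bdev options and framework init come first). Condensed into one place, the calls that appear in the trace below amount to the following sequence; the values are copied from this run and this is only a readable summary of what the harness does, not a replacement for it:

  rpc.py bdev_set_options -p 5 -c 1
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420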
00:12:18.839 [2024-07-24 07:01:33.240006] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.839 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.839 [2024-07-24 07:01:33.389127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:19.098 [2024-07-24 07:01:33.603364] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:19.098 [2024-07-24 07:01:33.603408] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:19.098 [2024-07-24 07:01:33.603422] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:19.098 [2024-07-24 07:01:33.603433] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:19.098 [2024-07-24 07:01:33.603444] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:19.098 [2024-07-24 07:01:33.603571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.098 [2024-07-24 07:01:33.603676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:19.098 [2024-07-24 07:01:33.603701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.098 [2024-07-24 07:01:33.603713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:19.665 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:19.665 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:12:19.665 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:19.665 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:19.665 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:19.665 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.665 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:19.665 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.665 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:19.665 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.665 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:19.665 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.665 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:19.980 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.980 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:19.980 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:19.980 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:19.980 [2024-07-24 07:01:34.359239] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7f6a02a8a940) succeed. 00:12:19.980 [2024-07-24 07:01:34.368417] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7f6a02a43940) succeed. 00:12:20.243 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.243 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:20.243 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.243 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:20.243 Malloc0 00:12:20.243 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.243 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:20.243 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.243 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:20.243 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.243 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:20.243 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.243 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:20.243 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.243 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:20.243 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.243 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:20.243 [2024-07-24 07:01:34.788194] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:20.243 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.243 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1533568 00:12:20.243 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:20.243 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:20.243 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1533572 00:12:20.243 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:20.243 07:01:34 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:20.244 { 00:12:20.244 "params": { 00:12:20.244 "name": "Nvme$subsystem", 00:12:20.244 "trtype": "$TEST_TRANSPORT", 00:12:20.244 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:20.244 "adrfam": "ipv4", 00:12:20.244 "trsvcid": "$NVMF_PORT", 00:12:20.244 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:20.244 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:20.244 "hdgst": ${hdgst:-false}, 00:12:20.244 "ddgst": ${ddgst:-false} 00:12:20.244 }, 00:12:20.244 "method": "bdev_nvme_attach_controller" 00:12:20.244 } 00:12:20.244 EOF 00:12:20.244 )") 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1533575 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:20.244 { 00:12:20.244 "params": { 00:12:20.244 "name": "Nvme$subsystem", 00:12:20.244 "trtype": "$TEST_TRANSPORT", 00:12:20.244 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:20.244 "adrfam": "ipv4", 00:12:20.244 "trsvcid": "$NVMF_PORT", 00:12:20.244 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:20.244 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:20.244 "hdgst": ${hdgst:-false}, 00:12:20.244 "ddgst": ${ddgst:-false} 00:12:20.244 }, 00:12:20.244 "method": "bdev_nvme_attach_controller" 00:12:20.244 } 00:12:20.244 EOF 00:12:20.244 )") 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1533579 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:20.244 { 00:12:20.244 
"params": { 00:12:20.244 "name": "Nvme$subsystem", 00:12:20.244 "trtype": "$TEST_TRANSPORT", 00:12:20.244 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:20.244 "adrfam": "ipv4", 00:12:20.244 "trsvcid": "$NVMF_PORT", 00:12:20.244 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:20.244 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:20.244 "hdgst": ${hdgst:-false}, 00:12:20.244 "ddgst": ${ddgst:-false} 00:12:20.244 }, 00:12:20.244 "method": "bdev_nvme_attach_controller" 00:12:20.244 } 00:12:20.244 EOF 00:12:20.244 )") 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:20.244 { 00:12:20.244 "params": { 00:12:20.244 "name": "Nvme$subsystem", 00:12:20.244 "trtype": "$TEST_TRANSPORT", 00:12:20.244 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:20.244 "adrfam": "ipv4", 00:12:20.244 "trsvcid": "$NVMF_PORT", 00:12:20.244 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:20.244 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:20.244 "hdgst": ${hdgst:-false}, 00:12:20.244 "ddgst": ${ddgst:-false} 00:12:20.244 }, 00:12:20.244 "method": "bdev_nvme_attach_controller" 00:12:20.244 } 00:12:20.244 EOF 00:12:20.244 )") 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1533568 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:20.244 "params": { 00:12:20.244 "name": "Nvme1", 00:12:20.244 "trtype": "rdma", 00:12:20.244 "traddr": "192.168.100.8", 00:12:20.244 "adrfam": "ipv4", 00:12:20.244 "trsvcid": "4420", 00:12:20.244 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:20.244 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:20.244 "hdgst": false, 00:12:20.244 "ddgst": false 00:12:20.244 }, 00:12:20.244 "method": "bdev_nvme_attach_controller" 00:12:20.244 }' 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:20.244 "params": { 00:12:20.244 "name": "Nvme1", 00:12:20.244 "trtype": "rdma", 00:12:20.244 "traddr": "192.168.100.8", 00:12:20.244 "adrfam": "ipv4", 00:12:20.244 "trsvcid": "4420", 00:12:20.244 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:20.244 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:20.244 "hdgst": false, 00:12:20.244 "ddgst": false 00:12:20.244 }, 00:12:20.244 "method": "bdev_nvme_attach_controller" 00:12:20.244 }' 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:20.244 "params": { 00:12:20.244 "name": "Nvme1", 00:12:20.244 "trtype": "rdma", 00:12:20.244 "traddr": "192.168.100.8", 00:12:20.244 "adrfam": "ipv4", 00:12:20.244 "trsvcid": "4420", 00:12:20.244 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:20.244 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:20.244 "hdgst": false, 00:12:20.244 "ddgst": false 00:12:20.244 }, 00:12:20.244 "method": "bdev_nvme_attach_controller" 00:12:20.244 }' 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:20.244 07:01:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:20.244 "params": { 00:12:20.244 "name": "Nvme1", 00:12:20.244 "trtype": "rdma", 00:12:20.244 "traddr": "192.168.100.8", 00:12:20.244 "adrfam": "ipv4", 00:12:20.244 "trsvcid": "4420", 00:12:20.244 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:20.244 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:20.244 "hdgst": false, 00:12:20.244 "ddgst": false 00:12:20.244 }, 00:12:20.244 "method": "bdev_nvme_attach_controller" 00:12:20.244 }' 00:12:20.244 [2024-07-24 07:01:34.873116] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:12:20.244 [2024-07-24 07:01:34.873222] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:20.504 [2024-07-24 07:01:34.875228] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:12:20.504 [2024-07-24 07:01:34.875321] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:20.504 [2024-07-24 07:01:34.877247] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:12:20.504 [2024-07-24 07:01:34.877334] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:20.504 [2024-07-24 07:01:34.878490] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
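Annotation: the four bdevperf invocations in this test (write/read/flush/unmap, one core each) all read their configuration from a /dev/fd/63 descriptor, which carries the gen_nvmf_target_json output shown in the trace. A sketch of the fan-out, assuming gen_nvmf_target_json is sourced from nvmf/common.sh and using process substitution in place of the numbered descriptors.

    # Sketch of the four-way bdevperf fan-out from target/bdev_io_wait.sh.
    BDEVPERF=./build/examples/bdevperf

    "$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!
    "$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 &
    READ_PID=$!
    "$BDEVPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
    FLUSH_PID=$!
    "$BDEVPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
    UNMAP_PID=$!

    # The script then waits on each PID in turn before tearing the subsystem down.
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"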
00:12:20.504 [2024-07-24 07:01:34.878573] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:20.504 EAL: No free 2048 kB hugepages reported on node 1 00:12:20.504 EAL: No free 2048 kB hugepages reported on node 1 00:12:20.763 [2024-07-24 07:01:35.144236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.763 EAL: No free 2048 kB hugepages reported on node 1 00:12:20.763 [2024-07-24 07:01:35.240838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.763 EAL: No free 2048 kB hugepages reported on node 1 00:12:20.763 [2024-07-24 07:01:35.338540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.763 [2024-07-24 07:01:35.359181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:21.023 [2024-07-24 07:01:35.395384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.023 [2024-07-24 07:01:35.440478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:21.023 [2024-07-24 07:01:35.567898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:12:21.023 [2024-07-24 07:01:35.592089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:21.282 Running I/O for 1 seconds... 00:12:21.282 Running I/O for 1 seconds... 00:12:21.541 Running I/O for 1 seconds... 00:12:21.541 Running I/O for 1 seconds... 00:12:22.479 00:12:22.479 Latency(us) 00:12:22.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.479 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:22.479 Nvme1n1 : 1.01 18339.50 71.64 0.00 0.00 6956.48 4849.66 11691.62 00:12:22.479 =================================================================================================================== 00:12:22.479 Total : 18339.50 71.64 0.00 0.00 6956.48 4849.66 11691.62 00:12:22.479 00:12:22.479 Latency(us) 00:12:22.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.479 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:22.479 Nvme1n1 : 1.00 233751.25 913.09 0.00 0.00 545.64 221.18 2424.83 00:12:22.480 =================================================================================================================== 00:12:22.480 Total : 233751.25 913.09 0.00 0.00 545.64 221.18 2424.83 00:12:22.480 00:12:22.480 Latency(us) 00:12:22.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.480 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:22.480 Nvme1n1 : 1.01 16385.35 64.01 0.00 0.00 7786.81 5190.45 25480.40 00:12:22.480 =================================================================================================================== 00:12:22.480 Total : 16385.35 64.01 0.00 0.00 7786.81 5190.45 25480.40 00:12:22.480 00:12:22.480 Latency(us) 00:12:22.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.480 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:22.480 Nvme1n1 : 1.01 14471.78 56.53 0.00 0.00 8817.08 5557.45 26424.12 00:12:22.480 =================================================================================================================== 00:12:22.480 Total : 14471.78 56.53 0.00 0.00 8817.08 5557.45 26424.12 00:12:23.417 07:01:37 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1533572 00:12:23.677 07:01:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1533575 00:12:23.677 07:01:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1533579 00:12:23.677 07:01:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:23.677 07:01:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.677 07:01:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:23.677 07:01:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.677 07:01:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:23.677 07:01:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:23.677 07:01:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:23.677 07:01:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:12:23.677 07:01:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:23.677 07:01:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:23.677 07:01:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:12:23.677 07:01:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:23.677 07:01:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:23.677 rmmod nvme_rdma 00:12:23.677 rmmod nvme_fabrics 00:12:23.677 07:01:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:23.677 07:01:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:12:23.677 07:01:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:12:23.677 07:01:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1533236 ']' 00:12:23.677 07:01:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1533236 00:12:23.677 07:01:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1533236 ']' 00:12:23.677 07:01:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1533236 00:12:23.677 07:01:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:12:23.677 07:01:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:23.677 07:01:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1533236 00:12:23.677 07:01:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:23.677 07:01:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:23.677 07:01:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1533236' 00:12:23.677 killing process with pid 1533236 00:12:23.677 07:01:38 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1533236 00:12:23.677 07:01:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1533236 00:12:25.583 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:25.583 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:25.583 00:12:25.583 real 0m15.560s 00:12:25.583 user 0m36.787s 00:12:25.583 sys 0m8.492s 00:12:25.583 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:25.583 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:25.583 ************************************ 00:12:25.583 END TEST nvmf_bdev_io_wait 00:12:25.583 ************************************ 00:12:25.583 07:01:40 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:12:25.583 07:01:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:25.583 07:01:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:25.583 07:01:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:25.583 ************************************ 00:12:25.583 START TEST nvmf_queue_depth 00:12:25.583 ************************************ 00:12:25.583 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:12:25.843 * Looking for test storage... 
00:12:25.843 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.843 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:25.844 
07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:25.844 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:25.844 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:25.844 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:25.844 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:25.844 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:25.844 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:25.844 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.844 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:25.844 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:25.844 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:25.844 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.844 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.844 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.844 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:25.844 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:25.844 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:12:25.844 07:01:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 
00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:33.968 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:33.968 
07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:33.968 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:33.968 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:33.968 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 
00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # rdma_device_init 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # uname 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:33.968 07:01:48 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:33.968 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:33.968 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:33.968 altname enp217s0f0np0 00:12:33.968 altname ens818f0np0 00:12:33.968 inet 192.168.100.8/24 scope global mlx_0_0 00:12:33.968 valid_lft forever preferred_lft forever 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:33.968 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:33.968 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:33.968 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:33.968 altname enp217s0f1np1 00:12:33.968 altname ens818f1np1 00:12:33.968 inet 192.168.100.9/24 scope global mlx_0_1 00:12:33.968 valid_lft forever preferred_lft forever 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@86 -- # get_rdma_if_list 
00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@456 -- # 
RDMA_IP_LIST='192.168.100.8 00:12:33.969 192.168.100.9' 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:33.969 192.168.100.9' 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # head -n 1 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:33.969 192.168.100.9' 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # tail -n +2 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # head -n 1 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1538440 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1538440 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1538440 ']' 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:33.969 07:01:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:33.969 [2024-07-24 07:01:48.548749] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
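Annotation: nvmftestinit for the queue-depth test repeats the RDMA bring-up seen earlier in the run: load the IB/RDMA kernel modules, enumerate the mlx_0_* netdevs, and read their IPv4 addresses. A sketch of those helpers, with names and commands taken from the traced nvmf/common.sh; error handling is omitted.

    # Sketch of the RDMA bring-up helpers traced above.
    load_ib_rdma_modules() {
        local m
        for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
            modprobe "$m"
        done
    }

    get_ip_address() {
        # First IPv4 address configured on the given netdev, prefix stripped.
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    load_ib_rdma_modules
    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run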
00:12:33.969 [2024-07-24 07:01:48.548862] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.227 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.227 [2024-07-24 07:01:48.698660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.486 [2024-07-24 07:01:48.902357] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:34.486 [2024-07-24 07:01:48.902399] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:34.486 [2024-07-24 07:01:48.902414] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:34.486 [2024-07-24 07:01:48.902428] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:34.486 [2024-07-24 07:01:48.902439] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:34.486 [2024-07-24 07:01:48.902468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.745 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:34.745 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:34.745 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:34.745 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:34.745 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:34.745 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.745 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:34.745 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.745 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:35.004 [2024-07-24 07:01:49.387539] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028840/0x7fa1bf16c940) succeed. 00:12:35.004 [2024-07-24 07:01:49.396443] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000289c0/0x7fa1bf125940) succeed. 
00:12:35.004 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.004 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:35.004 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.004 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:35.004 Malloc0 00:12:35.004 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.004 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:35.004 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.004 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:35.004 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.004 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:35.004 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.004 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:35.004 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.004 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:35.004 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.004 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:35.004 [2024-07-24 07:01:49.591179] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:35.004 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.004 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1538719 00:12:35.004 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:35.004 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:35.004 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1538719 /var/tmp/bdevperf.sock 00:12:35.004 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1538719 ']' 00:12:35.004 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:35.004 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:35.004 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:35.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:35.004 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:35.005 07:01:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:35.263 [2024-07-24 07:01:49.673569] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:12:35.263 [2024-07-24 07:01:49.673673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1538719 ] 00:12:35.263 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.263 [2024-07-24 07:01:49.821058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.522 [2024-07-24 07:01:50.034021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.089 07:01:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:36.089 07:01:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:36.090 07:01:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:36.090 07:01:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.090 07:01:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:36.090 NVMe0n1 00:12:36.090 07:01:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.090 07:01:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:36.090 Running I/O for 10 seconds... 
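Note: between the Malloc0 creation above and the "Running I/O for 10 seconds..." line, the test issues a fixed RPC sequence: export a 64 MiB / 512-byte-block malloc bdev through subsystem cnode1 on 192.168.100.8:4420, then point a standalone bdevperf at it over a private RPC socket. A condensed sketch of that flow, with $SPDK standing in for the workspace checkout path used by the job:

    SPDK=/path/to/spdk    # placeholder; the job uses its own jenkins workspace path

    # target side: namespace backed by a malloc bdev, RDMA listener on port 4420
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

    # initiator side: bdevperf waits for RPC (-z), queue depth 1024, 4 KiB verify workload, 10 s
    $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

    # attach the remote namespace as NVMe0n1, then kick off the measured run
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests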
00:12:48.301 00:12:48.301 Latency(us) 00:12:48.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:48.301 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:48.301 Verification LBA range: start 0x0 length 0x4000 00:12:48.301 NVMe0n1 : 10.05 15969.51 62.38 0.00 0.00 63911.11 17720.93 40684.75 00:12:48.301 =================================================================================================================== 00:12:48.301 Total : 15969.51 62.38 0.00 0.00 63911.11 17720.93 40684.75 00:12:48.301 0 00:12:48.301 07:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1538719 00:12:48.301 07:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1538719 ']' 00:12:48.301 07:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1538719 00:12:48.301 07:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:12:48.301 07:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:48.301 07:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1538719 00:12:48.301 07:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:48.301 07:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:48.301 07:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1538719' 00:12:48.301 killing process with pid 1538719 00:12:48.301 07:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1538719 00:12:48.301 Received shutdown signal, test time was about 10.000000 seconds 00:12:48.301 00:12:48.301 Latency(us) 00:12:48.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:48.301 =================================================================================================================== 00:12:48.301 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:48.301 07:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1538719 00:12:48.301 07:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:48.301 07:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:48.301 07:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:48.301 07:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:12:48.301 07:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:48.301 07:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:48.301 07:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:12:48.301 07:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:48.301 07:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:48.301 rmmod nvme_rdma 00:12:48.301 rmmod nvme_fabrics 00:12:48.301 07:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
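Note: the teardown above is autotest_common.sh's killprocess, which inspects the target's comm (reactor_0 here) before sending the signal, followed by nvmf/common.sh's nvmfcleanup, which syncs and retries unloading the kernel initiator modules. A simplified sketch of the two helpers as the trace exercises them (the real killprocess special-cases a "sudo" comm; error handling and the exact retry loop are trimmed):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] && kill -0 "$pid" || return 1      # pid must be set and the process alive
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1                    # sketch: just bail instead of escalating
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }

    nvmfcleanup() {
        sync
        set +e
        for i in {1..20}; do                              # modules unload once all queues are torn down
            modprobe -v -r nvme-rdma
            modprobe -v -r nvme-fabrics && break
            sleep 1
        done
        set -e
    }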
00:12:48.301 07:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:12:48.301 07:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:12:48.301 07:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1538440 ']' 00:12:48.301 07:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1538440 00:12:48.301 07:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1538440 ']' 00:12:48.301 07:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1538440 00:12:48.301 07:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:12:48.301 07:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:48.301 07:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1538440 00:12:48.301 07:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:48.301 07:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:48.301 07:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1538440' 00:12:48.301 killing process with pid 1538440 00:12:48.301 07:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1538440 00:12:48.301 07:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1538440 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:49.289 00:12:49.289 real 0m23.321s 00:12:49.289 user 0m29.375s 00:12:49.289 sys 0m7.159s 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:49.289 ************************************ 00:12:49.289 END TEST nvmf_queue_depth 00:12:49.289 ************************************ 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:49.289 ************************************ 00:12:49.289 START TEST nvmf_target_multipath 00:12:49.289 ************************************ 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:12:49.289 * Looking for test storage... 
00:12:49.289 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:49.289 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.290 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.290 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.290 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:49.290 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.290 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:12:49.290 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:49.290 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:49.290 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:49.290 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:49.290 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:49.290 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:49.290 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:49.290 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:49.290 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:49.290 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:49.290 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:49.290 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:49.290 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:49.290 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:49.290 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:49.290 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:49.290 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:49.290 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:49.290 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.290 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.290 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.290 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:49.290 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:49.290 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:12:49.290 07:02:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:57.412 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:57.412 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:12:57.412 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:57.412 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:57.412 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:57.412 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:57.412 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:57.412 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:12:57.412 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:57.412 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@296 -- # e810=() 00:12:57.412 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:12:57.412 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:12:57.412 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:12:57.412 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:12:57.412 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:12:57.412 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:57.412 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:57.412 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:57.413 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:57.413 
07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:57.413 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:57.413 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: 
mlx_0_1' 00:12:57.413 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # rdma_device_init 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # uname 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 
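Note: nvmftestinit above walks the mlx5 PCI functions, loads the RDMA stack (ib_core, ib_uverbs, rdma_cm, rdma_ucm, ...) and then maps each RDMA-capable netdev to its IPv4 address. The extraction idiom the trace repeats for mlx_0_0 and mlx_0_1 is just:

    get_ip_address() {
        local interface=$1
        # "ip -o" prints one record per line; field 4 is ADDR/PREFIX, so drop the prefix length
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0    # 192.168.100.8 on this rig
    get_ip_address mlx_0_1    # 192.168.100.9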
00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:57.413 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:57.414 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:57.414 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:57.414 altname enp217s0f0np0 00:12:57.414 altname ens818f0np0 00:12:57.414 inet 192.168.100.8/24 scope global mlx_0_0 00:12:57.414 valid_lft forever preferred_lft forever 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:57.414 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:57.414 
link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:57.414 altname enp217s0f1np1 00:12:57.414 altname ens818f1np1 00:12:57.414 inet 192.168.100.9/24 scope global mlx_0_1 00:12:57.414 valid_lft forever preferred_lft forever 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:57.414 192.168.100.9' 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:57.414 192.168.100.9' 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # head -n 1 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:57.414 192.168.100.9' 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # tail -n +2 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # head -n 1 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:12:57.414 run this test only with TCP transport for now 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # 
'[' rdma == tcp ']' 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:57.414 rmmod nvme_rdma 00:12:57.414 rmmod nvme_fabrics 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:57.414 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:57.415 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:57.415 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:57.415 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:57.415 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:57.415 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:57.415 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:57.415 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:57.415 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:57.415 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:57.415 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:57.415 00:12:57.415 real 0m8.333s 00:12:57.415 user 0m2.265s 00:12:57.415 sys 0m6.304s 00:12:57.415 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:57.415 07:02:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:57.415 ************************************ 00:12:57.415 END TEST nvmf_target_multipath 00:12:57.415 ************************************ 00:12:57.415 07:02:11 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:12:57.415 07:02:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:57.415 07:02:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:57.415 07:02:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:57.415 ************************************ 00:12:57.415 START TEST nvmf_zcopy 00:12:57.415 ************************************ 00:12:57.415 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:12:57.674 * Looking for test storage... 00:12:57.674 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:57.674 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:57.674 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:57.674 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:57.674 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:57.674 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:57.674 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:57.674 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:57.674 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:57.674 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:57.674 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:57.674 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:57.674 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:57.674 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:57.674 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:57.674 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:57.674 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:57.674 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:57.674 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:57.674 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:57.674 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.674 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.674 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.675 07:02:12 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.675 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.675 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.675 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:57.675 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.675 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:12:57.675 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:57.675 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:57.675 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:57.675 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:57.675 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:57.675 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:57.675 
07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:57.675 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:57.675 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:57.675 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:57.675 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:57.675 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:57.675 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:57.675 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:57.675 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.675 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.675 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.675 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:57.675 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:57.675 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:12:57.675 07:02:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:05.797 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:05.797 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:13:05.797 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:05.797 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:05.797 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:05.797 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:05.797 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:05.797 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:13:05.797 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:13:05.798 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:13:05.798 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- 
# [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:13:05.798 Found net devices under 0000:d9:00.0: mlx_0_0 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:13:05.798 Found net devices under 0000:d9:00.1: mlx_0_1 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # rdma_device_init 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # uname 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:05.798 07:02:19 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 
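The get_ip_address calls traced above recover each RDMA interface's IPv4 address from "ip -o -4 addr show". A minimal reconstruction of that helper, pieced together from the ip/awk/cut steps in the trace (the function wrapper and anything not shown in the log are illustrative, not the verbatim common.sh code):

    # Reconstructed from the common.sh trace above; not the verbatim helper.
    get_ip_address() {
        local interface=$1
        # "ip -o -4" prints one line per IPv4 address; field 4 is "ADDR/PREFIX",
        # so awk selects the field and cut drops the prefix length.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    ip=$(get_ip_address mlx_0_0)   # yields 192.168.100.8 on this rig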
00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:05.798 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:05.798 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:13:05.798 altname enp217s0f0np0 00:13:05.798 altname ens818f0np0 00:13:05.798 inet 192.168.100.8/24 scope global mlx_0_0 00:13:05.798 valid_lft forever preferred_lft forever 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:05.798 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:05.799 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:05.799 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:05.799 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:05.799 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:05.799 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:05.799 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:05.799 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:05.799 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:13:05.799 altname enp217s0f1np1 00:13:05.799 altname ens818f1np1 00:13:05.799 inet 192.168.100.9/24 scope global mlx_0_1 00:13:05.799 valid_lft forever preferred_lft forever 00:13:05.799 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:13:05.799 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:05.799 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:05.799 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:05.799 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:05.799 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:05.799 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:05.799 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:05.799 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:05.799 07:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:13:05.799 07:02:20 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:05.799 192.168.100.9' 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:05.799 192.168.100.9' 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # head -n 1 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:05.799 192.168.100.9' 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # tail -n +2 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # head -n 1 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:05.799 07:02:20 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1548935 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1548935 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1548935 ']' 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:05.799 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:05.799 [2024-07-24 07:02:20.173595] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:13:05.799 [2024-07-24 07:02:20.173710] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:05.799 EAL: No free 2048 kB hugepages reported on node 1 00:13:05.799 [2024-07-24 07:02:20.321871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.058 [2024-07-24 07:02:20.534075] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:06.058 [2024-07-24 07:02:20.534122] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:06.058 [2024-07-24 07:02:20.534138] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:06.058 [2024-07-24 07:02:20.534153] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:06.058 [2024-07-24 07:02:20.534164] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
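nvmfappstart, traced above, launches the target in the background, records nvmfpid, and waitforlisten then blocks until the application answers on its RPC UNIX socket (/var/tmp/spdk.sock). The loop below is a simplified stand-in that only waits for the socket to appear; SPDK's real waitforlisten in autotest_common.sh performs additional RPC-level checks. The binary path, flags, and the 100-retry budget are taken from the log.

    # Simplified sketch of the start-and-wait pattern, not the verbatim helpers.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        [[ -S $rpc_addr ]] && break   # stop once the RPC socket exists
        sleep 0.1
    done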
00:13:06.058 [2024-07-24 07:02:20.534193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.318 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:06.318 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:13:06.318 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:06.318 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:06.318 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:06.577 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:06.577 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:13:06.577 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:13:06.577 Unsupported transport: rdma 00:13:06.577 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:13:06.577 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:13:06.577 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@806 -- # type=--id 00:13:06.577 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@807 -- # id=0 00:13:06.577 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:13:06.577 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:06.577 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:13:06.577 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:13:06.577 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # for n in $shm_files 00:13:06.577 07:02:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:06.577 nvmf_trace.0 00:13:06.577 07:02:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@821 -- # return 0 00:13:06.577 07:02:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:13:06.577 07:02:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:06.577 07:02:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:13:06.577 07:02:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:06.577 07:02:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:06.577 07:02:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:13:06.577 07:02:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:06.577 07:02:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:06.577 rmmod nvme_rdma 00:13:06.577 rmmod nvme_fabrics 00:13:06.577 07:02:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:06.577 07:02:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 
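Because zcopy.sh exits immediately for RDMA ("Unsupported transport: rdma"), the EXIT trap only has to archive the shared-memory trace file and unload the modules, as traced above. The archiving step boils down to the tar call in the log; output_dir here stands in for the spdk/../output path shown there.

    # Condensed form of the process_shm step above; error handling omitted.
    shm_file=nvmf_trace.0
    tar -C /dev/shm/ -cvzf "$output_dir/${shm_file}_shm.tar.gz" "$shm_file"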
00:13:06.577 07:02:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:13:06.577 07:02:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1548935 ']' 00:13:06.577 07:02:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1548935 00:13:06.577 07:02:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1548935 ']' 00:13:06.577 07:02:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1548935 00:13:06.577 07:02:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:13:06.577 07:02:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:06.577 07:02:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1548935 00:13:06.577 07:02:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:06.577 07:02:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:06.577 07:02:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1548935' 00:13:06.577 killing process with pid 1548935 00:13:06.577 07:02:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1548935 00:13:06.577 07:02:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1548935 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:07.957 00:13:07.957 real 0m10.374s 00:13:07.957 user 0m4.533s 00:13:07.957 sys 0m6.517s 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:07.957 ************************************ 00:13:07.957 END TEST nvmf_zcopy 00:13:07.957 ************************************ 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:07.957 ************************************ 00:13:07.957 START TEST nvmf_nmic 00:13:07.957 ************************************ 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:13:07.957 * Looking for test storage... 
00:13:07.957 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
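The host identity set up a little earlier in common.sh (nvme gen-hostnqn, NVME_HOSTNQN, NVME_HOSTID, NVME_HOST) pairs a freshly generated host NQN with its bare UUID. A sketch of that derivation follows; the parameter expansion is an assumption about how common.sh strips the prefix, while the values and the array contents match the trace.

    # Sketch of the host NQN / host ID setup seen in the trace above.
    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # bare <uuid>, e.g. 8013ee90-59d8-e711-906e-00163566263e
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")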
00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.957 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:08.217 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.217 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:08.217 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:08.217 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:13:08.217 07:02:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:13:16.343 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:13:16.343 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:16.343 
07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:13:16.343 Found net devices under 0000:d9:00.0: mlx_0_0 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:13:16.343 Found net devices under 0000:d9:00.1: mlx_0_1 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # rdma_device_init 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # uname 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:16.343 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:16.344 07:02:30 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:16.344 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:16.344 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:13:16.344 altname enp217s0f0np0 00:13:16.344 altname ens818f0np0 00:13:16.344 inet 192.168.100.8/24 scope global mlx_0_0 00:13:16.344 valid_lft forever preferred_lft forever 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 
00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:16.344 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:16.344 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:13:16.344 altname enp217s0f1np1 00:13:16.344 altname ens818f1np1 00:13:16.344 inet 192.168.100.9/24 scope global mlx_0_1 00:13:16.344 valid_lft forever preferred_lft forever 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 
00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:16.344 192.168.100.9' 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:16.344 192.168.100.9' 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # head -n 1 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:16.344 192.168.100.9' 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # tail -n +2 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # head -n 1 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1553375 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1553375 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1553375 ']' 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.344 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:16.345 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.345 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:16.345 07:02:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:16.604 [2024-07-24 07:02:30.982143] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:13:16.604 [2024-07-24 07:02:30.982235] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.604 EAL: No free 2048 kB hugepages reported on node 1 00:13:16.604 [2024-07-24 07:02:31.130352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:16.864 [2024-07-24 07:02:31.338883] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.864 [2024-07-24 07:02:31.338932] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.864 [2024-07-24 07:02:31.338948] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:16.864 [2024-07-24 07:02:31.338959] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:16.864 [2024-07-24 07:02:31.338971] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
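Just before the nmic target was launched above, common.sh folded the per-interface addresses into RDMA_IP_LIST and picked the first and second entries as the target IPs, using the head/tail pipeline visible in the trace. Condensed, with the literal list this host produced:

    # Target IP selection as traced above.
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'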
00:13:16.864 [2024-07-24 07:02:31.339051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.864 [2024-07-24 07:02:31.339126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.864 [2024-07-24 07:02:31.339185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.864 [2024-07-24 07:02:31.339196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:17.432 07:02:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:17.432 07:02:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:13:17.432 07:02:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:17.432 07:02:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:17.432 07:02:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:17.432 07:02:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.432 07:02:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:17.432 07:02:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.432 07:02:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:17.432 [2024-07-24 07:02:31.837139] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f50e7c8a940) succeed. 00:13:17.432 [2024-07-24 07:02:31.846433] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f50e7c46940) succeed. 
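The rpc_cmd wrapper used above drives the already-running nvmf_tgt over its JSON-RPC socket; in SPDK that is normally scripts/rpc.py, which is assumed here rather than shown in the log. The transport setup that produced the two "Create IB device ... succeed" notices would then look roughly like:

    # Assumed plain-CLI equivalent of the traced rpc_cmd call.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192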
00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:17.691 Malloc0 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:17.691 [2024-07-24 07:02:32.280726] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:17.691 test case1: single bdev can't be used in multiple subsystems 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:17.691 07:02:32 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:17.691 [2024-07-24 07:02:32.304454] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:17.691 [2024-07-24 07:02:32.304488] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:17.691 [2024-07-24 07:02:32.304502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.691 request: 00:13:17.691 { 00:13:17.691 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:17.691 "namespace": { 00:13:17.691 "bdev_name": "Malloc0", 00:13:17.691 "no_auto_visible": false 00:13:17.691 }, 00:13:17.691 "method": "nvmf_subsystem_add_ns", 00:13:17.691 "req_id": 1 00:13:17.691 } 00:13:17.691 Got JSON-RPC error response 00:13:17.691 response: 00:13:17.691 { 00:13:17.691 "code": -32602, 00:13:17.691 "message": "Invalid parameters" 00:13:17.691 } 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:17.691 Adding namespace failed - expected result. 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:17.691 test case2: host connect to nvmf target in multiple paths 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.691 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:17.691 [2024-07-24 07:02:32.320549] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:13:17.950 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.950 07:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:18.932 07:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:13:19.866 07:02:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:19.866 07:02:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1196 -- # local i=0 00:13:19.866 07:02:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 
nvme_devices=0 00:13:19.866 07:02:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:13:19.866 07:02:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # sleep 2 00:13:21.772 07:02:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:13:21.772 07:02:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:13:21.772 07:02:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:13:21.772 07:02:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:13:21.772 07:02:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:13:21.772 07:02:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # return 0 00:13:21.773 07:02:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:21.773 [global] 00:13:21.773 thread=1 00:13:21.773 invalidate=1 00:13:21.773 rw=write 00:13:21.773 time_based=1 00:13:21.773 runtime=1 00:13:21.773 ioengine=libaio 00:13:21.773 direct=1 00:13:21.773 bs=4096 00:13:21.773 iodepth=1 00:13:21.773 norandommap=0 00:13:21.773 numjobs=1 00:13:21.773 00:13:21.773 verify_dump=1 00:13:21.773 verify_backlog=512 00:13:21.773 verify_state_save=0 00:13:21.773 do_verify=1 00:13:21.773 verify=crc32c-intel 00:13:21.773 [job0] 00:13:21.773 filename=/dev/nvme0n1 00:13:21.773 Could not set queue depth (nvme0n1) 00:13:22.338 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:22.338 fio-3.35 00:13:22.338 Starting 1 thread 00:13:23.277 00:13:23.277 job0: (groupid=0, jobs=1): err= 0: pid=1554609: Wed Jul 24 07:02:37 2024 00:13:23.277 read: IOPS=6602, BW=25.8MiB/s (27.0MB/s)(25.8MiB/1001msec) 00:13:23.277 slat (nsec): min=8112, max=28232, avg=8797.67, stdev=775.96 00:13:23.277 clat (nsec): min=50639, max=92417, avg=64800.83, stdev=4129.94 00:13:23.277 lat (usec): min=63, max=101, avg=73.60, stdev= 4.17 00:13:23.277 clat percentiles (nsec): 00:13:23.277 | 1.00th=[56576], 5.00th=[58624], 10.00th=[59648], 20.00th=[61184], 00:13:23.277 | 30.00th=[62208], 40.00th=[63744], 50.00th=[64768], 60.00th=[66048], 00:13:23.277 | 70.00th=[67072], 80.00th=[68096], 90.00th=[70144], 95.00th=[72192], 00:13:23.277 | 99.00th=[75264], 99.50th=[77312], 99.90th=[80384], 99.95th=[83456], 00:13:23.277 | 99.99th=[92672] 00:13:23.277 write: IOPS=6649, BW=26.0MiB/s (27.2MB/s)(26.0MiB/1001msec); 0 zone resets 00:13:23.277 slat (nsec): min=10008, max=45126, avg=10618.71, stdev=1129.55 00:13:23.277 clat (usec): min=43, max=128, avg=62.78, stdev= 4.42 00:13:23.277 lat (usec): min=62, max=173, avg=73.40, stdev= 4.55 00:13:23.277 clat percentiles (usec): 00:13:23.277 | 1.00th=[ 55], 5.00th=[ 57], 10.00th=[ 58], 20.00th=[ 60], 00:13:23.277 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 63], 60.00th=[ 64], 00:13:23.277 | 70.00th=[ 65], 80.00th=[ 67], 90.00th=[ 69], 95.00th=[ 71], 00:13:23.277 | 99.00th=[ 74], 99.50th=[ 76], 99.90th=[ 80], 99.95th=[ 91], 00:13:23.277 | 99.99th=[ 129] 00:13:23.277 bw ( KiB/s): min=28512, max=28512, per=100.00%, avg=28512.00, stdev= 0.00, samples=1 00:13:23.277 iops : min= 7128, max= 7128, avg=7128.00, stdev= 0.00, samples=1 00:13:23.277 lat (usec) : 50=0.04%, 100=99.95%, 250=0.01% 
00:13:23.277 cpu : usr=7.60%, sys=18.80%, ctx=13266, majf=0, minf=2 00:13:23.277 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:23.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.277 issued rwts: total=6609,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.277 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:23.277 00:13:23.277 Run status group 0 (all jobs): 00:13:23.277 READ: bw=25.8MiB/s (27.0MB/s), 25.8MiB/s-25.8MiB/s (27.0MB/s-27.0MB/s), io=25.8MiB (27.1MB), run=1001-1001msec 00:13:23.277 WRITE: bw=26.0MiB/s (27.2MB/s), 26.0MiB/s-26.0MiB/s (27.2MB/s-27.2MB/s), io=26.0MiB (27.3MB), run=1001-1001msec 00:13:23.277 00:13:23.277 Disk stats (read/write): 00:13:23.277 nvme0n1: ios=5827/6144, merge=0/0, ticks=346/330, in_queue=676, util=90.68% 00:13:23.277 07:02:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:25.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:25.184 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:25.184 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1217 -- # local i=0 00:13:25.184 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:13:25.184 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.184 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:13:25.184 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.184 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # return 0 00:13:25.184 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:25.184 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:25.184 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:25.184 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:13:25.184 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:25.184 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:25.184 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:13:25.184 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:25.184 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:25.184 rmmod nvme_rdma 00:13:25.184 rmmod nvme_fabrics 00:13:25.184 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:25.184 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:13:25.184 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:13:25.184 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1553375 ']' 00:13:25.184 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1553375 00:13:25.184 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@948 -- # '[' -z 1553375 ']' 00:13:25.184 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1553375 00:13:25.184 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:13:25.184 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:25.184 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1553375 00:13:25.444 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:25.444 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:25.444 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1553375' 00:13:25.444 killing process with pid 1553375 00:13:25.444 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1553375 00:13:25.444 07:02:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1553375 00:13:27.350 07:02:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:27.350 07:02:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:27.350 00:13:27.350 real 0m19.419s 00:13:27.350 user 0m50.482s 00:13:27.350 sys 0m7.445s 00:13:27.350 07:02:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:27.350 07:02:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:27.350 ************************************ 00:13:27.350 END TEST nvmf_nmic 00:13:27.350 ************************************ 00:13:27.350 07:02:41 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:13:27.350 07:02:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:27.350 07:02:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:27.350 07:02:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:27.350 ************************************ 00:13:27.350 START TEST nvmf_fio_target 00:13:27.350 ************************************ 00:13:27.350 07:02:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:13:27.610 * Looking for test storage... 
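[Note] Recapping the host side of the nmic run that just finished, the sequence reduces to the commands below, copied from the trace. The host NQN/ID UUID 8013ee90-59d8-e711-906e-00163566263e is the one generated for this run, and fio-wrapper is SPDK's helper that renders the [job0] file shown earlier.
    nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
    nvme connect -i 15 ... -s 4421                                  # same arguments, second path on port 4421
    scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v       # 4 KiB blocks, QD 1, sequential write, 1 s (matches the job file above)
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1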
00:13:27.610 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:27.610 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:27.610 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:27.610 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.610 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.610 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.610 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.610 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.610 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.610 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.610 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.610 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.610 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.610 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:27.610 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:13:27.610 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.610 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.610 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:27.610 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.610 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:27.610 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:27.611 07:02:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:13:35.737 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:13:35.737 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:35.737 07:02:50 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:13:35.737 Found net devices under 0000:d9:00.0: mlx_0_0 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:13:35.737 Found net devices under 0000:d9:00.1: mlx_0_1 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # rdma_device_init 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # uname 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:35.737 07:02:50 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:35.737 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:35.738 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:35.738 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:13:35.738 altname enp217s0f0np0 00:13:35.738 altname ens818f0np0 00:13:35.738 inet 192.168.100.8/24 scope global mlx_0_0 00:13:35.738 valid_lft forever preferred_lft forever 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:35.738 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:35.738 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:13:35.738 altname enp217s0f1np1 00:13:35.738 altname ens818f1np1 00:13:35.738 inet 192.168.100.9/24 scope global mlx_0_1 00:13:35.738 valid_lft forever preferred_lft forever 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 
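[Note] The interface address discovery in this stretch of the trace boils down to one pipeline per RDMA netdev; a sketch of what get_ip_address in nvmf/common.sh is doing, using the mlx_0_0/mlx_0_1 names and the addresses reported by this run:
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8
    ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.9
The first address becomes NVMF_FIRST_TARGET_IP and the second NVMF_SECOND_TARGET_IP further down in the trace.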
00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:35.738 192.168.100.9' 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:35.738 192.168.100.9' 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # head -n 1 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:35.738 192.168.100.9' 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # tail -n 
+2 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # head -n 1 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1559428 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1559428 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 1559428 ']' 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:35.738 07:02:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.997 [2024-07-24 07:02:50.389266] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:13:35.997 [2024-07-24 07:02:50.389356] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.997 EAL: No free 2048 kB hugepages reported on node 1 00:13:35.997 [2024-07-24 07:02:50.538801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:36.255 [2024-07-24 07:02:50.749542] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.255 [2024-07-24 07:02:50.749591] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:36.255 [2024-07-24 07:02:50.749605] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:36.255 [2024-07-24 07:02:50.749616] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:36.255 [2024-07-24 07:02:50.749631] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:36.255 [2024-07-24 07:02:50.749759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.255 [2024-07-24 07:02:50.749865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.255 [2024-07-24 07:02:50.749924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.255 [2024-07-24 07:02:50.749936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:36.824 07:02:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:36.824 07:02:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:13:36.824 07:02:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:36.824 07:02:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:36.824 07:02:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.824 07:02:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.824 07:02:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:36.824 [2024-07-24 07:02:51.405545] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f1c4d5b4940) succeed. 00:13:36.824 [2024-07-24 07:02:51.415291] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f1c4d56d940) succeed. 
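[Note] The fio-target setup that the following trace walks through is easier to read as the condensed RPC list below. This is a sketch assembled from the trace; the bdev names, 64 MiB / 512 B malloc sizes, RAID layouts, and the cnode1 NQN/listener are the ones this run uses.
    scripts/rpc.py bdev_malloc_create 64 512                                   # repeated to create Malloc0 .. Malloc6
    scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # likewise Malloc1, raid0, concat0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
The host then connects once and sees four namespaces (nvme0n1..nvme0n4), which is why the fio job file below carries four [jobN] sections.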
00:13:37.392 07:02:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:37.651 07:02:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:37.651 07:02:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:37.909 07:02:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:37.909 07:02:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:38.168 07:02:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:38.168 07:02:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:38.425 07:02:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:38.426 07:02:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:38.426 07:02:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:38.684 07:02:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:38.684 07:02:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:38.942 07:02:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:38.942 07:02:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:39.200 07:02:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:39.200 07:02:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:39.458 07:02:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:39.717 07:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:39.717 07:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:39.975 07:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:39.975 07:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:39.975 07:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:40.234 [2024-07-24 07:02:54.698131] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:40.234 07:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:40.524 07:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:40.524 07:02:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:41.461 07:02:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:41.461 07:02:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1196 -- # local i=0 00:13:41.461 07:02:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:13:41.461 07:02:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # [[ -n 4 ]] 00:13:41.461 07:02:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # nvme_device_counter=4 00:13:41.461 07:02:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # sleep 2 00:13:43.994 07:02:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:13:43.994 07:02:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:13:43.994 07:02:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:13:43.994 07:02:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_devices=4 00:13:43.994 07:02:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:13:43.994 07:02:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # return 0 00:13:43.994 07:02:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:43.994 [global] 00:13:43.994 thread=1 00:13:43.994 invalidate=1 00:13:43.994 rw=write 00:13:43.994 time_based=1 00:13:43.994 runtime=1 00:13:43.994 ioengine=libaio 00:13:43.994 direct=1 00:13:43.994 bs=4096 00:13:43.994 iodepth=1 00:13:43.994 norandommap=0 00:13:43.994 numjobs=1 00:13:43.994 00:13:43.994 verify_dump=1 00:13:43.994 verify_backlog=512 00:13:43.994 verify_state_save=0 00:13:43.994 do_verify=1 00:13:43.994 verify=crc32c-intel 00:13:43.994 [job0] 00:13:43.994 filename=/dev/nvme0n1 00:13:43.994 [job1] 00:13:43.994 filename=/dev/nvme0n2 00:13:43.994 [job2] 00:13:43.994 filename=/dev/nvme0n3 00:13:43.994 [job3] 00:13:43.994 filename=/dev/nvme0n4 00:13:43.994 Could not set queue depth (nvme0n1) 00:13:43.994 Could not set queue depth (nvme0n2) 00:13:43.994 Could not set queue depth (nvme0n3) 00:13:43.994 Could not set queue depth (nvme0n4) 00:13:43.994 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:43.994 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:43.994 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:43.994 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:43.994 fio-3.35 00:13:43.994 Starting 4 threads 00:13:45.372 00:13:45.372 job0: (groupid=0, jobs=1): err= 0: pid=1561113: Wed Jul 24 07:02:59 2024 00:13:45.372 read: IOPS=3940, BW=15.4MiB/s (16.1MB/s)(15.4MiB/1000msec) 00:13:45.372 slat (nsec): min=8043, max=30289, avg=8834.41, stdev=912.53 00:13:45.372 clat (usec): min=72, max=291, avg=116.22, stdev=26.39 00:13:45.372 lat (usec): min=81, max=299, avg=125.05, stdev=26.34 00:13:45.372 clat percentiles (usec): 00:13:45.372 | 1.00th=[ 77], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 85], 00:13:45.372 | 30.00th=[ 90], 40.00th=[ 118], 50.00th=[ 127], 60.00th=[ 131], 00:13:45.372 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 143], 95.00th=[ 149], 00:13:45.372 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 198], 99.95th=[ 198], 00:13:45.372 | 99.99th=[ 293] 00:13:45.372 write: IOPS=4096, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1000msec); 0 zone resets 00:13:45.372 slat (nsec): min=9976, max=38906, avg=11002.38, stdev=1060.37 00:13:45.372 clat (usec): min=67, max=199, avg=108.38, stdev=26.26 00:13:45.372 lat (usec): min=77, max=210, avg=119.38, stdev=26.26 00:13:45.372 clat percentiles (usec): 00:13:45.372 | 1.00th=[ 73], 5.00th=[ 76], 10.00th=[ 78], 20.00th=[ 81], 00:13:45.372 | 30.00th=[ 84], 40.00th=[ 89], 50.00th=[ 117], 60.00th=[ 125], 00:13:45.372 | 70.00th=[ 129], 80.00th=[ 133], 90.00th=[ 139], 95.00th=[ 143], 00:13:45.372 | 99.00th=[ 174], 99.50th=[ 180], 99.90th=[ 188], 99.95th=[ 198], 00:13:45.372 | 99.99th=[ 200] 00:13:45.372 bw ( KiB/s): min=17824, max=17824, per=25.56%, avg=17824.00, stdev= 0.00, samples=1 00:13:45.372 iops : min= 4456, max= 4456, avg=4456.00, stdev= 0.00, samples=1 00:13:45.372 lat (usec) : 100=41.77%, 250=58.21%, 500=0.01% 00:13:45.372 cpu : usr=6.70%, sys=9.70%, ctx=8036, majf=0, minf=1 00:13:45.372 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:45.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.372 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.372 issued rwts: total=3940,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:45.372 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:45.372 job1: (groupid=0, jobs=1): err= 0: pid=1561130: Wed Jul 24 07:02:59 2024 00:13:45.372 read: IOPS=4797, BW=18.7MiB/s (19.6MB/s)(18.8MiB/1001msec) 00:13:45.372 slat (nsec): min=7991, max=31096, avg=8792.31, stdev=847.39 00:13:45.372 clat (usec): min=70, max=197, avg=89.23, stdev=17.36 00:13:45.372 lat (usec): min=79, max=206, avg=98.03, stdev=17.42 00:13:45.372 clat percentiles (usec): 00:13:45.372 | 1.00th=[ 75], 5.00th=[ 77], 10.00th=[ 79], 20.00th=[ 80], 00:13:45.372 | 30.00th=[ 81], 40.00th=[ 83], 50.00th=[ 84], 60.00th=[ 86], 00:13:45.372 | 70.00th=[ 87], 80.00th=[ 90], 90.00th=[ 124], 95.00th=[ 135], 00:13:45.372 | 99.00th=[ 147], 99.50th=[ 151], 99.90th=[ 184], 99.95th=[ 192], 00:13:45.372 | 99.99th=[ 198] 00:13:45.372 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:13:45.372 slat (nsec): min=9993, max=68298, avg=10781.21, stdev=1370.82 00:13:45.372 clat (usec): min=68, max=190, 
avg=88.77, stdev=19.65 00:13:45.372 lat (usec): min=78, max=202, avg=99.55, stdev=19.88 00:13:45.372 clat percentiles (usec): 00:13:45.372 | 1.00th=[ 71], 5.00th=[ 74], 10.00th=[ 75], 20.00th=[ 77], 00:13:45.372 | 30.00th=[ 78], 40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 83], 00:13:45.372 | 70.00th=[ 86], 80.00th=[ 96], 90.00th=[ 125], 95.00th=[ 131], 00:13:45.372 | 99.00th=[ 145], 99.50th=[ 159], 99.90th=[ 180], 99.95th=[ 184], 00:13:45.372 | 99.99th=[ 192] 00:13:45.372 bw ( KiB/s): min=20200, max=20200, per=28.97%, avg=20200.00, stdev= 0.00, samples=1 00:13:45.372 iops : min= 5050, max= 5050, avg=5050.00, stdev= 0.00, samples=1 00:13:45.372 lat (usec) : 100=84.03%, 250=15.97% 00:13:45.372 cpu : usr=7.70%, sys=12.40%, ctx=9922, majf=0, minf=1 00:13:45.372 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:45.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.372 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.372 issued rwts: total=4802,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:45.372 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:45.372 job2: (groupid=0, jobs=1): err= 0: pid=1561131: Wed Jul 24 07:02:59 2024 00:13:45.372 read: IOPS=4475, BW=17.5MiB/s (18.3MB/s)(17.5MiB/1001msec) 00:13:45.372 slat (nsec): min=8208, max=19312, avg=8961.17, stdev=665.14 00:13:45.372 clat (usec): min=84, max=193, avg=100.41, stdev= 7.46 00:13:45.372 lat (usec): min=93, max=202, avg=109.37, stdev= 7.49 00:13:45.372 clat percentiles (usec): 00:13:45.372 | 1.00th=[ 88], 5.00th=[ 91], 10.00th=[ 93], 20.00th=[ 95], 00:13:45.372 | 30.00th=[ 97], 40.00th=[ 98], 50.00th=[ 100], 60.00th=[ 101], 00:13:45.372 | 70.00th=[ 103], 80.00th=[ 105], 90.00th=[ 110], 95.00th=[ 114], 00:13:45.372 | 99.00th=[ 123], 99.50th=[ 129], 99.90th=[ 153], 99.95th=[ 155], 00:13:45.372 | 99.99th=[ 194] 00:13:45.372 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:13:45.372 slat (nsec): min=10254, max=40760, avg=11033.30, stdev=1150.73 00:13:45.372 clat (usec): min=76, max=148, avg=95.70, stdev= 6.89 00:13:45.372 lat (usec): min=90, max=189, avg=106.73, stdev= 6.99 00:13:45.372 clat percentiles (usec): 00:13:45.372 | 1.00th=[ 84], 5.00th=[ 87], 10.00th=[ 88], 20.00th=[ 91], 00:13:45.372 | 30.00th=[ 92], 40.00th=[ 94], 50.00th=[ 95], 60.00th=[ 97], 00:13:45.372 | 70.00th=[ 99], 80.00th=[ 101], 90.00th=[ 105], 95.00th=[ 109], 00:13:45.372 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 133], 99.95th=[ 135], 00:13:45.372 | 99.99th=[ 149] 00:13:45.372 bw ( KiB/s): min=20040, max=20040, per=28.74%, avg=20040.00, stdev= 0.00, samples=1 00:13:45.372 iops : min= 5010, max= 5010, avg=5010.00, stdev= 0.00, samples=1 00:13:45.372 lat (usec) : 100=65.50%, 250=34.50% 00:13:45.372 cpu : usr=6.00%, sys=12.70%, ctx=9088, majf=0, minf=1 00:13:45.372 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:45.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.372 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.372 issued rwts: total=4480,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:45.372 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:45.372 job3: (groupid=0, jobs=1): err= 0: pid=1561132: Wed Jul 24 07:02:59 2024 00:13:45.372 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:13:45.372 slat (nsec): min=8352, max=27761, avg=8967.14, stdev=772.58 00:13:45.372 clat (usec): min=82, max=185, avg=128.50, 
stdev=17.01 00:13:45.372 lat (usec): min=92, max=194, avg=137.46, stdev=16.99 00:13:45.372 clat percentiles (usec): 00:13:45.372 | 1.00th=[ 87], 5.00th=[ 93], 10.00th=[ 98], 20.00th=[ 121], 00:13:45.372 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 135], 00:13:45.372 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 145], 95.00th=[ 149], 00:13:45.372 | 99.00th=[ 172], 99.50th=[ 176], 99.90th=[ 182], 99.95th=[ 184], 00:13:45.372 | 99.99th=[ 186] 00:13:45.372 write: IOPS=3621, BW=14.1MiB/s (14.8MB/s)(14.2MiB/1001msec); 0 zone resets 00:13:45.372 slat (nsec): min=10300, max=44337, avg=11147.49, stdev=1216.09 00:13:45.372 clat (usec): min=75, max=187, avg=124.72, stdev=16.06 00:13:45.372 lat (usec): min=86, max=198, avg=135.87, stdev=16.09 00:13:45.372 clat percentiles (usec): 00:13:45.372 | 1.00th=[ 84], 5.00th=[ 90], 10.00th=[ 98], 20.00th=[ 118], 00:13:45.372 | 30.00th=[ 122], 40.00th=[ 125], 50.00th=[ 127], 60.00th=[ 130], 00:13:45.372 | 70.00th=[ 133], 80.00th=[ 135], 90.00th=[ 141], 95.00th=[ 147], 00:13:45.372 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 184], 99.95th=[ 188], 00:13:45.372 | 99.99th=[ 188] 00:13:45.372 bw ( KiB/s): min=15824, max=15824, per=22.69%, avg=15824.00, stdev= 0.00, samples=1 00:13:45.372 iops : min= 3956, max= 3956, avg=3956.00, stdev= 0.00, samples=1 00:13:45.372 lat (usec) : 100=11.39%, 250=88.61% 00:13:45.372 cpu : usr=5.80%, sys=9.20%, ctx=7210, majf=0, minf=2 00:13:45.372 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:45.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.372 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.372 issued rwts: total=3584,3625,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:45.372 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:45.372 00:13:45.372 Run status group 0 (all jobs): 00:13:45.372 READ: bw=65.6MiB/s (68.8MB/s), 14.0MiB/s-18.7MiB/s (14.7MB/s-19.6MB/s), io=65.6MiB (68.8MB), run=1000-1001msec 00:13:45.372 WRITE: bw=68.1MiB/s (71.4MB/s), 14.1MiB/s-20.0MiB/s (14.8MB/s-20.9MB/s), io=68.2MiB (71.5MB), run=1000-1001msec 00:13:45.372 00:13:45.372 Disk stats (read/write): 00:13:45.372 nvme0n1: ios=3121/3513, merge=0/0, ticks=355/367, in_queue=722, util=84.37% 00:13:45.372 nvme0n2: ios=4031/4096, merge=0/0, ticks=338/316, in_queue=654, util=85.28% 00:13:45.372 nvme0n3: ios=3584/3997, merge=0/0, ticks=337/340, in_queue=677, util=88.54% 00:13:45.372 nvme0n4: ios=2753/3072, merge=0/0, ticks=358/368, in_queue=726, util=89.58% 00:13:45.372 07:02:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:45.372 [global] 00:13:45.372 thread=1 00:13:45.372 invalidate=1 00:13:45.372 rw=randwrite 00:13:45.372 time_based=1 00:13:45.372 runtime=1 00:13:45.372 ioengine=libaio 00:13:45.372 direct=1 00:13:45.372 bs=4096 00:13:45.372 iodepth=1 00:13:45.373 norandommap=0 00:13:45.373 numjobs=1 00:13:45.373 00:13:45.373 verify_dump=1 00:13:45.373 verify_backlog=512 00:13:45.373 verify_state_save=0 00:13:45.373 do_verify=1 00:13:45.373 verify=crc32c-intel 00:13:45.373 [job0] 00:13:45.373 filename=/dev/nvme0n1 00:13:45.373 [job1] 00:13:45.373 filename=/dev/nvme0n2 00:13:45.373 [job2] 00:13:45.373 filename=/dev/nvme0n3 00:13:45.373 [job3] 00:13:45.373 filename=/dev/nvme0n4 00:13:45.373 Could not set queue depth (nvme0n1) 00:13:45.373 Could not set queue depth (nvme0n2) 00:13:45.373 Could not set queue depth 
(nvme0n3) 00:13:45.373 Could not set queue depth (nvme0n4) 00:13:45.632 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:45.632 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:45.632 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:45.632 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:45.632 fio-3.35 00:13:45.632 Starting 4 threads 00:13:47.003 00:13:47.003 job0: (groupid=0, jobs=1): err= 0: pid=1561560: Wed Jul 24 07:03:01 2024 00:13:47.003 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:13:47.003 slat (nsec): min=7975, max=26336, avg=8827.85, stdev=819.48 00:13:47.003 clat (usec): min=69, max=233, avg=92.02, stdev=25.77 00:13:47.003 lat (usec): min=78, max=242, avg=100.85, stdev=25.78 00:13:47.003 clat percentiles (usec): 00:13:47.003 | 1.00th=[ 75], 5.00th=[ 77], 10.00th=[ 78], 20.00th=[ 80], 00:13:47.003 | 30.00th=[ 81], 40.00th=[ 83], 50.00th=[ 84], 60.00th=[ 85], 00:13:47.003 | 70.00th=[ 87], 80.00th=[ 90], 90.00th=[ 153], 95.00th=[ 163], 00:13:47.003 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 208], 99.95th=[ 212], 00:13:47.003 | 99.99th=[ 235] 00:13:47.003 write: IOPS=5073, BW=19.8MiB/s (20.8MB/s)(19.8MiB/1001msec); 0 zone resets 00:13:47.003 slat (nsec): min=9780, max=66350, avg=10477.19, stdev=1366.82 00:13:47.003 clat (usec): min=66, max=351, avg=91.22, stdev=28.12 00:13:47.003 lat (usec): min=76, max=361, avg=101.70, stdev=28.43 00:13:47.003 clat percentiles (usec): 00:13:47.003 | 1.00th=[ 71], 5.00th=[ 73], 10.00th=[ 75], 20.00th=[ 76], 00:13:47.003 | 30.00th=[ 78], 40.00th=[ 79], 50.00th=[ 81], 60.00th=[ 82], 00:13:47.003 | 70.00th=[ 85], 80.00th=[ 90], 90.00th=[ 151], 95.00th=[ 157], 00:13:47.003 | 99.00th=[ 174], 99.50th=[ 188], 99.90th=[ 212], 99.95th=[ 219], 00:13:47.003 | 99.99th=[ 351] 00:13:47.003 bw ( KiB/s): min=18040, max=18040, per=26.51%, avg=18040.00, stdev= 0.00, samples=1 00:13:47.003 iops : min= 4510, max= 4510, avg=4510.00, stdev= 0.00, samples=1 00:13:47.003 lat (usec) : 100=85.85%, 250=14.14%, 500=0.01% 00:13:47.003 cpu : usr=6.60%, sys=12.90%, ctx=9688, majf=0, minf=1 00:13:47.003 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:47.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.003 issued rwts: total=4608,5079,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.003 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:47.003 job1: (groupid=0, jobs=1): err= 0: pid=1561571: Wed Jul 24 07:03:01 2024 00:13:47.003 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:13:47.003 slat (nsec): min=8105, max=34588, avg=9802.65, stdev=2311.39 00:13:47.003 clat (usec): min=72, max=229, avg=121.86, stdev=28.96 00:13:47.003 lat (usec): min=81, max=238, avg=131.66, stdev=29.30 00:13:47.003 clat percentiles (usec): 00:13:47.003 | 1.00th=[ 79], 5.00th=[ 81], 10.00th=[ 83], 20.00th=[ 87], 00:13:47.003 | 30.00th=[ 95], 40.00th=[ 121], 50.00th=[ 128], 60.00th=[ 133], 00:13:47.003 | 70.00th=[ 137], 80.00th=[ 145], 90.00th=[ 161], 95.00th=[ 167], 00:13:47.003 | 99.00th=[ 182], 99.50th=[ 190], 99.90th=[ 210], 99.95th=[ 219], 00:13:47.003 | 99.99th=[ 229] 00:13:47.003 write: IOPS=3750, BW=14.6MiB/s (15.4MB/s)(14.7MiB/1001msec); 0 zone resets 
00:13:47.003 slat (nsec): min=9888, max=38213, avg=11738.84, stdev=2667.90 00:13:47.003 clat (usec): min=62, max=226, avg=124.43, stdev=26.14 00:13:47.003 lat (usec): min=79, max=236, avg=136.17, stdev=26.44 00:13:47.003 clat percentiles (usec): 00:13:47.003 | 1.00th=[ 75], 5.00th=[ 79], 10.00th=[ 82], 20.00th=[ 99], 00:13:47.003 | 30.00th=[ 118], 40.00th=[ 123], 50.00th=[ 127], 60.00th=[ 133], 00:13:47.003 | 70.00th=[ 139], 80.00th=[ 147], 90.00th=[ 155], 95.00th=[ 163], 00:13:47.003 | 99.00th=[ 188], 99.50th=[ 196], 99.90th=[ 210], 99.95th=[ 217], 00:13:47.003 | 99.99th=[ 227] 00:13:47.003 bw ( KiB/s): min=16384, max=16384, per=24.08%, avg=16384.00, stdev= 0.00, samples=1 00:13:47.003 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:13:47.003 lat (usec) : 100=26.00%, 250=74.00% 00:13:47.003 cpu : usr=5.30%, sys=10.30%, ctx=7338, majf=0, minf=1 00:13:47.003 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:47.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.003 issued rwts: total=3584,3754,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.003 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:47.003 job2: (groupid=0, jobs=1): err= 0: pid=1561574: Wed Jul 24 07:03:01 2024 00:13:47.003 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:13:47.003 slat (nsec): min=8195, max=31420, avg=9111.11, stdev=1145.85 00:13:47.003 clat (usec): min=76, max=215, avg=112.10, stdev=24.68 00:13:47.003 lat (usec): min=85, max=224, avg=121.21, stdev=24.66 00:13:47.003 clat percentiles (usec): 00:13:47.003 | 1.00th=[ 89], 5.00th=[ 92], 10.00th=[ 94], 20.00th=[ 96], 00:13:47.003 | 30.00th=[ 98], 40.00th=[ 100], 50.00th=[ 102], 60.00th=[ 105], 00:13:47.003 | 70.00th=[ 110], 80.00th=[ 130], 90.00th=[ 159], 95.00th=[ 167], 00:13:47.003 | 99.00th=[ 186], 99.50th=[ 194], 99.90th=[ 204], 99.95th=[ 210], 00:13:47.003 | 99.99th=[ 217] 00:13:47.003 write: IOPS=4110, BW=16.1MiB/s (16.8MB/s)(16.1MiB/1001msec); 0 zone resets 00:13:47.003 slat (nsec): min=10059, max=45688, avg=10908.79, stdev=1598.66 00:13:47.003 clat (usec): min=78, max=216, avg=107.30, stdev=24.41 00:13:47.003 lat (usec): min=88, max=227, avg=118.21, stdev=24.68 00:13:47.003 clat percentiles (usec): 00:13:47.003 | 1.00th=[ 85], 5.00th=[ 88], 10.00th=[ 89], 20.00th=[ 92], 00:13:47.003 | 30.00th=[ 94], 40.00th=[ 96], 50.00th=[ 98], 60.00th=[ 100], 00:13:47.003 | 70.00th=[ 104], 80.00th=[ 115], 90.00th=[ 153], 95.00th=[ 159], 00:13:47.003 | 99.00th=[ 184], 99.50th=[ 192], 99.90th=[ 208], 99.95th=[ 210], 00:13:47.003 | 99.99th=[ 217] 00:13:47.003 bw ( KiB/s): min=16384, max=16384, per=24.08%, avg=16384.00, stdev= 0.00, samples=1 00:13:47.003 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:13:47.003 lat (usec) : 100=51.07%, 250=48.93% 00:13:47.003 cpu : usr=5.30%, sys=11.70%, ctx=8212, majf=0, minf=1 00:13:47.003 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:47.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.004 issued rwts: total=4096,4115,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.004 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:47.004 job3: (groupid=0, jobs=1): err= 0: pid=1561577: Wed Jul 24 07:03:01 2024 00:13:47.004 read: IOPS=3800, BW=14.8MiB/s (15.6MB/s)(14.9MiB/1002msec) 
00:13:47.004 slat (nsec): min=8153, max=33041, avg=9087.25, stdev=1272.32 00:13:47.004 clat (usec): min=72, max=192, avg=116.56, stdev=23.09 00:13:47.004 lat (usec): min=87, max=201, avg=125.65, stdev=23.07 00:13:47.004 clat percentiles (usec): 00:13:47.004 | 1.00th=[ 84], 5.00th=[ 87], 10.00th=[ 89], 20.00th=[ 92], 00:13:47.004 | 30.00th=[ 95], 40.00th=[ 101], 50.00th=[ 124], 60.00th=[ 130], 00:13:47.004 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 145], 95.00th=[ 149], 00:13:47.004 | 99.00th=[ 176], 99.50th=[ 182], 99.90th=[ 192], 99.95th=[ 194], 00:13:47.004 | 99.99th=[ 194] 00:13:47.004 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:13:47.004 slat (nsec): min=10004, max=43411, avg=10929.33, stdev=1522.40 00:13:47.004 clat (usec): min=71, max=190, avg=112.41, stdev=22.38 00:13:47.004 lat (usec): min=82, max=200, avg=123.34, stdev=22.27 00:13:47.004 clat percentiles (usec): 00:13:47.004 | 1.00th=[ 80], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 89], 00:13:47.004 | 30.00th=[ 92], 40.00th=[ 97], 50.00th=[ 119], 60.00th=[ 126], 00:13:47.004 | 70.00th=[ 129], 80.00th=[ 133], 90.00th=[ 139], 95.00th=[ 143], 00:13:47.004 | 99.00th=[ 169], 99.50th=[ 178], 99.90th=[ 186], 99.95th=[ 186], 00:13:47.004 | 99.99th=[ 190] 00:13:47.004 bw ( KiB/s): min=19184, max=19184, per=28.20%, avg=19184.00, stdev= 0.00, samples=1 00:13:47.004 iops : min= 4796, max= 4796, avg=4796.00, stdev= 0.00, samples=1 00:13:47.004 lat (usec) : 100=40.88%, 250=59.12% 00:13:47.004 cpu : usr=5.00%, sys=10.99%, ctx=7905, majf=0, minf=2 00:13:47.004 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:47.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.004 issued rwts: total=3808,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.004 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:47.004 00:13:47.004 Run status group 0 (all jobs): 00:13:47.004 READ: bw=62.7MiB/s (65.8MB/s), 14.0MiB/s-18.0MiB/s (14.7MB/s-18.9MB/s), io=62.9MiB (65.9MB), run=1001-1002msec 00:13:47.004 WRITE: bw=66.4MiB/s (69.7MB/s), 14.6MiB/s-19.8MiB/s (15.4MB/s-20.8MB/s), io=66.6MiB (69.8MB), run=1001-1002msec 00:13:47.004 00:13:47.004 Disk stats (read/write): 00:13:47.004 nvme0n1: ios=3817/4096, merge=0/0, ticks=337/345, in_queue=682, util=84.37% 00:13:47.004 nvme0n2: ios=2908/3072, merge=0/0, ticks=328/366, in_queue=694, util=85.07% 00:13:47.004 nvme0n3: ios=3226/3584, merge=0/0, ticks=345/335, in_queue=680, util=88.32% 00:13:47.004 nvme0n4: ios=3160/3584, merge=0/0, ticks=336/356, in_queue=692, util=89.46% 00:13:47.004 07:03:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:47.004 [global] 00:13:47.004 thread=1 00:13:47.004 invalidate=1 00:13:47.004 rw=write 00:13:47.004 time_based=1 00:13:47.004 runtime=1 00:13:47.004 ioengine=libaio 00:13:47.004 direct=1 00:13:47.004 bs=4096 00:13:47.004 iodepth=128 00:13:47.004 norandommap=0 00:13:47.004 numjobs=1 00:13:47.004 00:13:47.004 verify_dump=1 00:13:47.004 verify_backlog=512 00:13:47.004 verify_state_save=0 00:13:47.004 do_verify=1 00:13:47.004 verify=crc32c-intel 00:13:47.004 [job0] 00:13:47.004 filename=/dev/nvme0n1 00:13:47.004 [job1] 00:13:47.004 filename=/dev/nvme0n2 00:13:47.004 [job2] 00:13:47.004 filename=/dev/nvme0n3 00:13:47.004 [job3] 00:13:47.004 filename=/dev/nvme0n4 00:13:47.004 Could 
not set queue depth (nvme0n1) 00:13:47.004 Could not set queue depth (nvme0n2) 00:13:47.004 Could not set queue depth (nvme0n3) 00:13:47.004 Could not set queue depth (nvme0n4) 00:13:47.262 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:47.262 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:47.262 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:47.262 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:47.262 fio-3.35 00:13:47.262 Starting 4 threads 00:13:48.656 00:13:48.656 job0: (groupid=0, jobs=1): err= 0: pid=1562078: Wed Jul 24 07:03:02 2024 00:13:48.656 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:13:48.656 slat (usec): min=2, max=3285, avg=103.65, stdev=378.33 00:13:48.656 clat (usec): min=11806, max=17311, avg=13414.13, stdev=652.77 00:13:48.656 lat (usec): min=11865, max=17320, avg=13517.78, stdev=706.63 00:13:48.656 clat percentiles (usec): 00:13:48.656 | 1.00th=[12125], 5.00th=[12518], 10.00th=[12649], 20.00th=[12911], 00:13:48.656 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13304], 60.00th=[13435], 00:13:48.656 | 70.00th=[13566], 80.00th=[13698], 90.00th=[14091], 95.00th=[14746], 00:13:48.656 | 99.00th=[15664], 99.50th=[15926], 99.90th=[16319], 99.95th=[16450], 00:13:48.656 | 99.99th=[17433] 00:13:48.656 write: IOPS=5058, BW=19.8MiB/s (20.7MB/s)(19.8MiB/1003msec); 0 zone resets 00:13:48.656 slat (usec): min=2, max=3225, avg=99.21, stdev=359.27 00:13:48.656 clat (usec): min=2723, max=16520, avg=12831.59, stdev=1021.44 00:13:48.656 lat (usec): min=3617, max=16524, avg=12930.80, stdev=1049.47 00:13:48.656 clat percentiles (usec): 00:13:48.656 | 1.00th=[ 8094], 5.00th=[11863], 10.00th=[12125], 20.00th=[12387], 00:13:48.656 | 30.00th=[12518], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:13:48.656 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13566], 95.00th=[14222], 00:13:48.656 | 99.00th=[15139], 99.50th=[15533], 99.90th=[16319], 99.95th=[16450], 00:13:48.656 | 99.99th=[16581] 00:13:48.656 bw ( KiB/s): min=19096, max=20480, per=18.83%, avg=19788.00, stdev=978.64, samples=2 00:13:48.656 iops : min= 4774, max= 5120, avg=4947.00, stdev=244.66, samples=2 00:13:48.656 lat (msec) : 4=0.19%, 10=0.64%, 20=99.17% 00:13:48.656 cpu : usr=1.80%, sys=4.79%, ctx=813, majf=0, minf=1 00:13:48.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:13:48.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:48.656 issued rwts: total=4608,5074,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:48.656 job1: (groupid=0, jobs=1): err= 0: pid=1562090: Wed Jul 24 07:03:02 2024 00:13:48.656 read: IOPS=9188, BW=35.9MiB/s (37.6MB/s)(36.0MiB/1003msec) 00:13:48.656 slat (usec): min=2, max=1139, avg=54.26, stdev=198.71 00:13:48.656 clat (usec): min=3934, max=9571, avg=7099.30, stdev=316.01 00:13:48.657 lat (usec): min=4808, max=9573, avg=7153.57, stdev=298.22 00:13:48.657 clat percentiles (usec): 00:13:48.657 | 1.00th=[ 6128], 5.00th=[ 6456], 10.00th=[ 6783], 20.00th=[ 6915], 00:13:48.657 | 30.00th=[ 7046], 40.00th=[ 7111], 50.00th=[ 7177], 60.00th=[ 7177], 00:13:48.657 | 70.00th=[ 7242], 80.00th=[ 7308], 90.00th=[ 7373], 95.00th=[ 7439], 
00:13:48.657 | 99.00th=[ 7635], 99.50th=[ 7635], 99.90th=[ 8586], 99.95th=[ 9503], 00:13:48.657 | 99.99th=[ 9634] 00:13:48.657 write: IOPS=9220, BW=36.0MiB/s (37.8MB/s)(36.1MiB/1003msec); 0 zone resets 00:13:48.657 slat (usec): min=2, max=1398, avg=51.08, stdev=184.77 00:13:48.657 clat (usec): min=2101, max=7326, avg=6673.52, stdev=347.31 00:13:48.657 lat (usec): min=2944, max=7906, avg=6724.60, stdev=333.44 00:13:48.657 clat percentiles (usec): 00:13:48.657 | 1.00th=[ 5669], 5.00th=[ 6063], 10.00th=[ 6325], 20.00th=[ 6521], 00:13:48.657 | 30.00th=[ 6652], 40.00th=[ 6652], 50.00th=[ 6718], 60.00th=[ 6783], 00:13:48.657 | 70.00th=[ 6849], 80.00th=[ 6915], 90.00th=[ 6980], 95.00th=[ 7046], 00:13:48.657 | 99.00th=[ 7111], 99.50th=[ 7177], 99.90th=[ 7308], 99.95th=[ 7308], 00:13:48.657 | 99.99th=[ 7308] 00:13:48.657 bw ( KiB/s): min=36864, max=36864, per=35.08%, avg=36864.00, stdev= 0.00, samples=2 00:13:48.657 iops : min= 9216, max= 9216, avg=9216.00, stdev= 0.00, samples=2 00:13:48.657 lat (msec) : 4=0.18%, 10=99.82% 00:13:48.657 cpu : usr=4.19%, sys=5.69%, ctx=1185, majf=0, minf=1 00:13:48.657 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:13:48.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:48.657 issued rwts: total=9216,9248,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.657 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:48.657 job2: (groupid=0, jobs=1): err= 0: pid=1562095: Wed Jul 24 07:03:02 2024 00:13:48.657 read: IOPS=7739, BW=30.2MiB/s (31.7MB/s)(30.3MiB/1003msec) 00:13:48.657 slat (usec): min=2, max=1424, avg=62.77, stdev=236.58 00:13:48.657 clat (usec): min=2002, max=8807, avg=8153.60, stdev=538.32 00:13:48.657 lat (usec): min=2005, max=8810, avg=8216.38, stdev=486.78 00:13:48.657 clat percentiles (usec): 00:13:48.657 | 1.00th=[ 6652], 5.00th=[ 7439], 10.00th=[ 7832], 20.00th=[ 8029], 00:13:48.657 | 30.00th=[ 8094], 40.00th=[ 8160], 50.00th=[ 8225], 60.00th=[ 8291], 00:13:48.657 | 70.00th=[ 8356], 80.00th=[ 8455], 90.00th=[ 8586], 95.00th=[ 8586], 00:13:48.657 | 99.00th=[ 8717], 99.50th=[ 8717], 99.90th=[ 8848], 99.95th=[ 8848], 00:13:48.657 | 99.99th=[ 8848] 00:13:48.657 write: IOPS=8167, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1003msec); 0 zone resets 00:13:48.657 slat (usec): min=2, max=2174, avg=59.30, stdev=222.42 00:13:48.657 clat (usec): min=5809, max=9218, avg=7765.98, stdev=331.72 00:13:48.657 lat (usec): min=5820, max=10056, avg=7825.28, stdev=249.14 00:13:48.657 clat percentiles (usec): 00:13:48.657 | 1.00th=[ 6587], 5.00th=[ 7111], 10.00th=[ 7439], 20.00th=[ 7570], 00:13:48.657 | 30.00th=[ 7635], 40.00th=[ 7767], 50.00th=[ 7832], 60.00th=[ 7898], 00:13:48.657 | 70.00th=[ 7898], 80.00th=[ 7963], 90.00th=[ 8094], 95.00th=[ 8225], 00:13:48.657 | 99.00th=[ 8455], 99.50th=[ 8586], 99.90th=[ 8979], 99.95th=[ 8979], 00:13:48.657 | 99.99th=[ 9241] 00:13:48.657 bw ( KiB/s): min=32416, max=32768, per=31.02%, avg=32592.00, stdev=248.90, samples=2 00:13:48.657 iops : min= 8104, max= 8192, avg=8148.00, stdev=62.23, samples=2 00:13:48.657 lat (msec) : 4=0.29%, 10=99.71% 00:13:48.657 cpu : usr=3.89%, sys=5.59%, ctx=1021, majf=0, minf=1 00:13:48.657 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:13:48.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:48.657 issued rwts: 
total=7763,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.657 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:48.657 job3: (groupid=0, jobs=1): err= 0: pid=1562096: Wed Jul 24 07:03:02 2024 00:13:48.657 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:13:48.657 slat (usec): min=2, max=3838, avg=134.05, stdev=497.18 00:13:48.657 clat (usec): min=13874, max=20840, avg=17406.93, stdev=694.38 00:13:48.657 lat (usec): min=15531, max=21049, avg=17540.98, stdev=630.15 00:13:48.657 clat percentiles (usec): 00:13:48.657 | 1.00th=[15008], 5.00th=[16319], 10.00th=[16712], 20.00th=[16909], 00:13:48.657 | 30.00th=[17171], 40.00th=[17171], 50.00th=[17433], 60.00th=[17695], 00:13:48.657 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18220], 95.00th=[18482], 00:13:48.657 | 99.00th=[19006], 99.50th=[19268], 99.90th=[20579], 99.95th=[20841], 00:13:48.657 | 99.99th=[20841] 00:13:48.657 write: IOPS=3821, BW=14.9MiB/s (15.7MB/s)(15.0MiB/1003msec); 0 zone resets 00:13:48.657 slat (usec): min=2, max=3624, avg=131.70, stdev=478.74 00:13:48.657 clat (usec): min=1780, max=21000, avg=16749.04, stdev=1504.55 00:13:48.657 lat (usec): min=4787, max=21004, avg=16880.74, stdev=1460.03 00:13:48.657 clat percentiles (usec): 00:13:48.657 | 1.00th=[ 9241], 5.00th=[15139], 10.00th=[16057], 20.00th=[16319], 00:13:48.657 | 30.00th=[16581], 40.00th=[16909], 50.00th=[16909], 60.00th=[17171], 00:13:48.657 | 70.00th=[17171], 80.00th=[17433], 90.00th=[17957], 95.00th=[17957], 00:13:48.657 | 99.00th=[18482], 99.50th=[18482], 99.90th=[20055], 99.95th=[21103], 00:13:48.657 | 99.99th=[21103] 00:13:48.657 bw ( KiB/s): min=13264, max=16384, per=14.11%, avg=14824.00, stdev=2206.17, samples=2 00:13:48.657 iops : min= 3316, max= 4096, avg=3706.00, stdev=551.54, samples=2 00:13:48.657 lat (msec) : 2=0.01%, 10=0.86%, 20=98.88%, 50=0.24% 00:13:48.657 cpu : usr=1.50%, sys=3.99%, ctx=688, majf=0, minf=1 00:13:48.657 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:48.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:48.657 issued rwts: total=3584,3833,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.657 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:48.657 00:13:48.657 Run status group 0 (all jobs): 00:13:48.657 READ: bw=98.0MiB/s (103MB/s), 14.0MiB/s-35.9MiB/s (14.6MB/s-37.6MB/s), io=98.3MiB (103MB), run=1003-1003msec 00:13:48.657 WRITE: bw=103MiB/s (108MB/s), 14.9MiB/s-36.0MiB/s (15.7MB/s-37.8MB/s), io=103MiB (108MB), run=1003-1003msec 00:13:48.657 00:13:48.657 Disk stats (read/write): 00:13:48.657 nvme0n1: ios=3973/4096, merge=0/0, ticks=17156/17142, in_queue=34298, util=84.35% 00:13:48.657 nvme0n2: ios=7619/7680, merge=0/0, ticks=26620/24874, in_queue=51494, util=85.17% 00:13:48.657 nvme0n3: ios=6570/6656, merge=0/0, ticks=17492/16655, in_queue=34147, util=88.43% 00:13:48.657 nvme0n4: ios=3070/3072, merge=0/0, ticks=13233/12939, in_queue=26172, util=89.47% 00:13:48.657 07:03:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:48.657 [global] 00:13:48.657 thread=1 00:13:48.657 invalidate=1 00:13:48.657 rw=randwrite 00:13:48.657 time_based=1 00:13:48.657 runtime=1 00:13:48.657 ioengine=libaio 00:13:48.657 direct=1 00:13:48.657 bs=4096 00:13:48.657 iodepth=128 00:13:48.657 norandommap=0 00:13:48.657 numjobs=1 00:13:48.657 
00:13:48.657 verify_dump=1 00:13:48.657 verify_backlog=512 00:13:48.657 verify_state_save=0 00:13:48.657 do_verify=1 00:13:48.657 verify=crc32c-intel 00:13:48.657 [job0] 00:13:48.657 filename=/dev/nvme0n1 00:13:48.657 [job1] 00:13:48.657 filename=/dev/nvme0n2 00:13:48.657 [job2] 00:13:48.657 filename=/dev/nvme0n3 00:13:48.657 [job3] 00:13:48.657 filename=/dev/nvme0n4 00:13:48.657 Could not set queue depth (nvme0n1) 00:13:48.657 Could not set queue depth (nvme0n2) 00:13:48.657 Could not set queue depth (nvme0n3) 00:13:48.657 Could not set queue depth (nvme0n4) 00:13:48.917 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:48.917 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:48.917 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:48.917 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:48.917 fio-3.35 00:13:48.917 Starting 4 threads 00:13:50.327 00:13:50.327 job0: (groupid=0, jobs=1): err= 0: pid=1562528: Wed Jul 24 07:03:04 2024 00:13:50.327 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:13:50.327 slat (usec): min=2, max=1218, avg=119.66, stdev=305.52 00:13:50.327 clat (usec): min=13987, max=17360, avg=15582.96, stdev=533.54 00:13:50.327 lat (usec): min=13992, max=17403, avg=15702.61, stdev=524.38 00:13:50.327 clat percentiles (usec): 00:13:50.327 | 1.00th=[14222], 5.00th=[14615], 10.00th=[14746], 20.00th=[15270], 00:13:50.327 | 30.00th=[15401], 40.00th=[15533], 50.00th=[15664], 60.00th=[15795], 00:13:50.327 | 70.00th=[15795], 80.00th=[15926], 90.00th=[16188], 95.00th=[16450], 00:13:50.327 | 99.00th=[16909], 99.50th=[16909], 99.90th=[17171], 99.95th=[17171], 00:13:50.327 | 99.99th=[17433] 00:13:50.327 write: IOPS=4315, BW=16.9MiB/s (17.7MB/s)(16.9MiB/1004msec); 0 zone resets 00:13:50.327 slat (usec): min=2, max=1519, avg=113.77, stdev=291.06 00:13:50.327 clat (usec): min=2956, max=17981, avg=14575.32, stdev=1128.57 00:13:50.327 lat (usec): min=3929, max=17985, avg=14689.09, stdev=1125.19 00:13:50.327 clat percentiles (usec): 00:13:50.327 | 1.00th=[ 8586], 5.00th=[13698], 10.00th=[13829], 20.00th=[14222], 00:13:50.327 | 30.00th=[14484], 40.00th=[14615], 50.00th=[14746], 60.00th=[14877], 00:13:50.327 | 70.00th=[15008], 80.00th=[15139], 90.00th=[15401], 95.00th=[15533], 00:13:50.328 | 99.00th=[16057], 99.50th=[16319], 99.90th=[17957], 99.95th=[17957], 00:13:50.328 | 99.99th=[17957] 00:13:50.328 bw ( KiB/s): min=16456, max=17192, per=16.26%, avg=16824.00, stdev=520.43, samples=2 00:13:50.328 iops : min= 4114, max= 4298, avg=4206.00, stdev=130.11, samples=2 00:13:50.328 lat (msec) : 4=0.11%, 10=0.61%, 20=99.29% 00:13:50.328 cpu : usr=2.59%, sys=3.89%, ctx=1310, majf=0, minf=1 00:13:50.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:50.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:50.328 issued rwts: total=4096,4333,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.328 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:50.328 job1: (groupid=0, jobs=1): err= 0: pid=1562533: Wed Jul 24 07:03:04 2024 00:13:50.328 read: IOPS=9206, BW=36.0MiB/s (37.7MB/s)(36.0MiB/1001msec) 00:13:50.328 slat (usec): min=2, max=1062, avg=52.64, stdev=186.23 00:13:50.328 clat 
(usec): min=5646, max=8262, avg=6948.61, stdev=330.22 00:13:50.328 lat (usec): min=5915, max=8448, avg=7001.25, stdev=342.14 00:13:50.328 clat percentiles (usec): 00:13:50.328 | 1.00th=[ 6128], 5.00th=[ 6456], 10.00th=[ 6587], 20.00th=[ 6718], 00:13:50.328 | 30.00th=[ 6783], 40.00th=[ 6849], 50.00th=[ 6915], 60.00th=[ 6980], 00:13:50.328 | 70.00th=[ 7046], 80.00th=[ 7177], 90.00th=[ 7439], 95.00th=[ 7570], 00:13:50.328 | 99.00th=[ 7832], 99.50th=[ 7898], 99.90th=[ 8160], 99.95th=[ 8225], 00:13:50.328 | 99.99th=[ 8291] 00:13:50.328 write: IOPS=9552, BW=37.3MiB/s (39.1MB/s)(37.4MiB/1001msec); 0 zone resets 00:13:50.328 slat (usec): min=2, max=1279, avg=49.95, stdev=174.64 00:13:50.328 clat (usec): min=466, max=8172, avg=6557.56, stdev=470.72 00:13:50.328 lat (usec): min=1222, max=8175, avg=6607.51, stdev=475.57 00:13:50.328 clat percentiles (usec): 00:13:50.328 | 1.00th=[ 5669], 5.00th=[ 6128], 10.00th=[ 6194], 20.00th=[ 6325], 00:13:50.328 | 30.00th=[ 6456], 40.00th=[ 6521], 50.00th=[ 6521], 60.00th=[ 6587], 00:13:50.328 | 70.00th=[ 6718], 80.00th=[ 6783], 90.00th=[ 7046], 95.00th=[ 7177], 00:13:50.328 | 99.00th=[ 7439], 99.50th=[ 7504], 99.90th=[ 7635], 99.95th=[ 7832], 00:13:50.328 | 99.99th=[ 8160] 00:13:50.328 bw ( KiB/s): min=37264, max=37264, per=36.00%, avg=37264.00, stdev= 0.00, samples=1 00:13:50.328 iops : min= 9316, max= 9316, avg=9316.00, stdev= 0.00, samples=1 00:13:50.328 lat (usec) : 500=0.01% 00:13:50.328 lat (msec) : 2=0.09%, 4=0.26%, 10=99.65% 00:13:50.328 cpu : usr=5.00%, sys=7.70%, ctx=1279, majf=0, minf=1 00:13:50.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:13:50.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:50.328 issued rwts: total=9216,9562,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.328 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:50.328 job2: (groupid=0, jobs=1): err= 0: pid=1562534: Wed Jul 24 07:03:04 2024 00:13:50.328 read: IOPS=7657, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1003msec) 00:13:50.328 slat (usec): min=2, max=1234, avg=64.42, stdev=235.58 00:13:50.328 clat (usec): min=6972, max=10092, avg=8487.37, stdev=315.09 00:13:50.328 lat (usec): min=7218, max=10096, avg=8551.79, stdev=240.57 00:13:50.328 clat percentiles (usec): 00:13:50.328 | 1.00th=[ 7373], 5.00th=[ 7767], 10.00th=[ 8160], 20.00th=[ 8356], 00:13:50.328 | 30.00th=[ 8455], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8586], 00:13:50.328 | 70.00th=[ 8586], 80.00th=[ 8717], 90.00th=[ 8848], 95.00th=[ 8848], 00:13:50.328 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[10028], 99.95th=[10028], 00:13:50.328 | 99.99th=[10028] 00:13:50.328 write: IOPS=7751, BW=30.3MiB/s (31.8MB/s)(30.4MiB/1003msec); 0 zone resets 00:13:50.328 slat (usec): min=2, max=1504, avg=61.03, stdev=220.99 00:13:50.328 clat (usec): min=2183, max=8646, avg=7967.07, stdev=480.26 00:13:50.328 lat (usec): min=3086, max=9449, avg=8028.10, stdev=441.96 00:13:50.328 clat percentiles (usec): 00:13:50.328 | 1.00th=[ 6194], 5.00th=[ 7177], 10.00th=[ 7635], 20.00th=[ 7832], 00:13:50.328 | 30.00th=[ 7963], 40.00th=[ 8029], 50.00th=[ 8094], 60.00th=[ 8160], 00:13:50.328 | 70.00th=[ 8160], 80.00th=[ 8225], 90.00th=[ 8291], 95.00th=[ 8356], 00:13:50.328 | 99.00th=[ 8455], 99.50th=[ 8586], 99.90th=[ 8586], 99.95th=[ 8586], 00:13:50.328 | 99.99th=[ 8586] 00:13:50.328 bw ( KiB/s): min=29112, max=32328, per=29.68%, avg=30720.00, stdev=2274.06, samples=2 00:13:50.328 iops : min= 7278, 
max= 8082, avg=7680.00, stdev=568.51, samples=2 00:13:50.328 lat (msec) : 4=0.14%, 10=99.80%, 20=0.06% 00:13:50.328 cpu : usr=3.69%, sys=6.89%, ctx=1007, majf=0, minf=1 00:13:50.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:13:50.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:50.328 issued rwts: total=7680,7775,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.328 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:50.328 job3: (groupid=0, jobs=1): err= 0: pid=1562535: Wed Jul 24 07:03:04 2024 00:13:50.328 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:13:50.328 slat (usec): min=2, max=1214, avg=120.61, stdev=306.92 00:13:50.328 clat (usec): min=13310, max=17342, avg=15576.95, stdev=512.64 00:13:50.328 lat (usec): min=13888, max=17470, avg=15697.56, stdev=500.69 00:13:50.328 clat percentiles (usec): 00:13:50.328 | 1.00th=[14353], 5.00th=[14615], 10.00th=[14877], 20.00th=[15139], 00:13:50.328 | 30.00th=[15401], 40.00th=[15533], 50.00th=[15664], 60.00th=[15795], 00:13:50.328 | 70.00th=[15795], 80.00th=[15926], 90.00th=[16188], 95.00th=[16319], 00:13:50.328 | 99.00th=[16909], 99.50th=[16909], 99.90th=[17171], 99.95th=[17171], 00:13:50.328 | 99.99th=[17433] 00:13:50.328 write: IOPS=4290, BW=16.8MiB/s (17.6MB/s)(16.8MiB/1004msec); 0 zone resets 00:13:50.328 slat (usec): min=2, max=2510, avg=113.37, stdev=289.95 00:13:50.328 clat (usec): min=2971, max=17986, avg=14637.04, stdev=1127.93 00:13:50.328 lat (usec): min=3947, max=17990, avg=14750.41, stdev=1123.56 00:13:50.328 clat percentiles (usec): 00:13:50.328 | 1.00th=[ 8586], 5.00th=[13698], 10.00th=[13960], 20.00th=[14353], 00:13:50.328 | 30.00th=[14615], 40.00th=[14746], 50.00th=[14746], 60.00th=[14877], 00:13:50.328 | 70.00th=[15008], 80.00th=[15139], 90.00th=[15401], 95.00th=[15664], 00:13:50.328 | 99.00th=[16188], 99.50th=[16450], 99.90th=[17957], 99.95th=[17957], 00:13:50.328 | 99.99th=[17957] 00:13:50.328 bw ( KiB/s): min=16384, max=17064, per=16.16%, avg=16724.00, stdev=480.83, samples=2 00:13:50.328 iops : min= 4096, max= 4266, avg=4181.00, stdev=120.21, samples=2 00:13:50.328 lat (msec) : 4=0.10%, 10=0.58%, 20=99.32% 00:13:50.328 cpu : usr=2.59%, sys=4.09%, ctx=1297, majf=0, minf=1 00:13:50.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:50.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:50.328 issued rwts: total=4096,4308,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.328 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:50.328 00:13:50.328 Run status group 0 (all jobs): 00:13:50.328 READ: bw=97.6MiB/s (102MB/s), 15.9MiB/s-36.0MiB/s (16.7MB/s-37.7MB/s), io=98.0MiB (103MB), run=1001-1004msec 00:13:50.328 WRITE: bw=101MiB/s (106MB/s), 16.8MiB/s-37.3MiB/s (17.6MB/s-39.1MB/s), io=101MiB (106MB), run=1001-1004msec 00:13:50.328 00:13:50.328 Disk stats (read/write): 00:13:50.328 nvme0n1: ios=3444/3584, merge=0/0, ticks=17349/17294, in_queue=34643, util=84.25% 00:13:50.328 nvme0n2: ios=7680/7917, merge=0/0, ticks=13150/12675, in_queue=25825, util=85.19% 00:13:50.328 nvme0n3: ios=6185/6656, merge=0/0, ticks=25689/25611, in_queue=51300, util=88.44% 00:13:50.328 nvme0n4: ios=3369/3584, merge=0/0, ticks=17242/17357, in_queue=34599, util=89.48% 00:13:50.328 07:03:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@55 -- # sync 00:13:50.328 07:03:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1562637 00:13:50.328 07:03:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:50.328 07:03:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:50.328 [global] 00:13:50.328 thread=1 00:13:50.328 invalidate=1 00:13:50.328 rw=read 00:13:50.328 time_based=1 00:13:50.328 runtime=10 00:13:50.328 ioengine=libaio 00:13:50.328 direct=1 00:13:50.328 bs=4096 00:13:50.328 iodepth=1 00:13:50.328 norandommap=1 00:13:50.328 numjobs=1 00:13:50.328 00:13:50.328 [job0] 00:13:50.328 filename=/dev/nvme0n1 00:13:50.328 [job1] 00:13:50.328 filename=/dev/nvme0n2 00:13:50.328 [job2] 00:13:50.328 filename=/dev/nvme0n3 00:13:50.328 [job3] 00:13:50.328 filename=/dev/nvme0n4 00:13:50.328 Could not set queue depth (nvme0n1) 00:13:50.328 Could not set queue depth (nvme0n2) 00:13:50.328 Could not set queue depth (nvme0n3) 00:13:50.328 Could not set queue depth (nvme0n4) 00:13:50.591 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:50.591 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:50.591 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:50.591 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:50.591 fio-3.35 00:13:50.591 Starting 4 threads 00:13:53.116 07:03:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:53.374 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=85929984, buflen=4096 00:13:53.374 fio: pid=1562961, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:53.374 07:03:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:53.374 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=109346816, buflen=4096 00:13:53.374 fio: pid=1562960, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:53.632 07:03:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:53.632 07:03:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:53.632 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=54566912, buflen=4096 00:13:53.632 fio: pid=1562957, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:53.889 07:03:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:53.889 07:03:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:54.147 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=55066624, buflen=4096 00:13:54.147 fio: pid=1562959, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:54.147 00:13:54.147 job0: (groupid=0, jobs=1): err=121 
(file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1562957: Wed Jul 24 07:03:08 2024 00:13:54.147 read: IOPS=9794, BW=38.3MiB/s (40.1MB/s)(116MiB/3033msec) 00:13:54.147 slat (usec): min=7, max=20175, avg=10.84, stdev=181.39 00:13:54.147 clat (usec): min=54, max=518, avg=89.46, stdev= 8.58 00:13:54.147 lat (usec): min=63, max=20309, avg=100.30, stdev=181.84 00:13:54.147 clat percentiles (usec): 00:13:54.147 | 1.00th=[ 65], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 85], 00:13:54.147 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 89], 60.00th=[ 91], 00:13:54.147 | 70.00th=[ 92], 80.00th=[ 95], 90.00th=[ 99], 95.00th=[ 103], 00:13:54.147 | 99.00th=[ 115], 99.50th=[ 120], 99.90th=[ 133], 99.95th=[ 139], 00:13:54.147 | 99.99th=[ 241] 00:13:54.147 bw ( KiB/s): min=39848, max=40152, per=31.71%, avg=39963.20, stdev=123.50, samples=5 00:13:54.147 iops : min= 9962, max=10038, avg=9990.80, stdev=30.87, samples=5 00:13:54.147 lat (usec) : 100=92.07%, 250=7.92%, 500=0.01%, 750=0.01% 00:13:54.147 cpu : usr=3.66%, sys=14.18%, ctx=29712, majf=0, minf=1 00:13:54.147 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:54.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.147 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.147 issued rwts: total=29707,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:54.147 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:54.147 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1562959: Wed Jul 24 07:03:08 2024 00:13:54.147 read: IOPS=8765, BW=34.2MiB/s (35.9MB/s)(117MiB/3403msec) 00:13:54.147 slat (usec): min=6, max=25794, avg=11.42, stdev=207.32 00:13:54.147 clat (usec): min=46, max=25043, avg=100.98, stdev=189.89 00:13:54.147 lat (usec): min=61, max=25891, avg=112.40, stdev=281.12 00:13:54.147 clat percentiles (usec): 00:13:54.147 | 1.00th=[ 59], 5.00th=[ 62], 10.00th=[ 65], 20.00th=[ 79], 00:13:54.147 | 30.00th=[ 82], 40.00th=[ 85], 50.00th=[ 87], 60.00th=[ 91], 00:13:54.147 | 70.00th=[ 120], 80.00th=[ 137], 90.00th=[ 145], 95.00th=[ 149], 00:13:54.147 | 99.00th=[ 184], 99.50th=[ 192], 99.90th=[ 202], 99.95th=[ 206], 00:13:54.147 | 99.99th=[ 1029] 00:13:54.147 bw ( KiB/s): min=26760, max=41984, per=26.69%, avg=33637.17, stdev=7112.75, samples=6 00:13:54.147 iops : min= 6690, max=10496, avg=8409.17, stdev=1778.28, samples=6 00:13:54.147 lat (usec) : 50=0.01%, 100=67.27%, 250=32.71% 00:13:54.147 lat (msec) : 2=0.01%, 50=0.01% 00:13:54.147 cpu : usr=2.88%, sys=13.11%, ctx=29836, majf=0, minf=1 00:13:54.147 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:54.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.147 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.147 issued rwts: total=29829,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:54.147 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:54.147 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1562960: Wed Jul 24 07:03:08 2024 00:13:54.147 read: IOPS=9413, BW=36.8MiB/s (38.6MB/s)(104MiB/2836msec) 00:13:54.147 slat (usec): min=8, max=12894, avg= 9.90, stdev=94.95 00:13:54.147 clat (usec): min=63, max=193, avg=94.52, stdev= 7.93 00:13:54.147 lat (usec): min=72, max=13005, avg=104.42, stdev=95.42 00:13:54.147 clat percentiles (usec): 00:13:54.147 | 1.00th=[ 83], 5.00th=[ 86], 10.00th=[ 88], 20.00th=[ 89], 
00:13:54.147 | 30.00th=[ 91], 40.00th=[ 92], 50.00th=[ 93], 60.00th=[ 95], 00:13:54.147 | 70.00th=[ 97], 80.00th=[ 99], 90.00th=[ 103], 95.00th=[ 109], 00:13:54.147 | 99.00th=[ 127], 99.50th=[ 133], 99.90th=[ 143], 99.95th=[ 147], 00:13:54.147 | 99.99th=[ 159] 00:13:54.147 bw ( KiB/s): min=37984, max=38984, per=30.55%, avg=38504.00, stdev=425.69, samples=5 00:13:54.147 iops : min= 9496, max= 9746, avg=9626.00, stdev=106.42, samples=5 00:13:54.147 lat (usec) : 100=83.89%, 250=16.11% 00:13:54.147 cpu : usr=3.88%, sys=13.51%, ctx=26701, majf=0, minf=1 00:13:54.147 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:54.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.147 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.147 issued rwts: total=26697,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:54.147 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:54.147 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1562961: Wed Jul 24 07:03:08 2024 00:13:54.147 read: IOPS=7934, BW=31.0MiB/s (32.5MB/s)(81.9MiB/2644msec) 00:13:54.147 slat (nsec): min=8151, max=36528, avg=9039.63, stdev=1031.17 00:13:54.147 clat (usec): min=76, max=332, avg=114.63, stdev=23.53 00:13:54.147 lat (usec): min=90, max=340, avg=123.67, stdev=23.59 00:13:54.147 clat percentiles (usec): 00:13:54.147 | 1.00th=[ 86], 5.00th=[ 89], 10.00th=[ 91], 20.00th=[ 93], 00:13:54.147 | 30.00th=[ 96], 40.00th=[ 98], 50.00th=[ 103], 60.00th=[ 124], 00:13:54.147 | 70.00th=[ 135], 80.00th=[ 141], 90.00th=[ 145], 95.00th=[ 151], 00:13:54.147 | 99.00th=[ 174], 99.50th=[ 182], 99.90th=[ 192], 99.95th=[ 196], 00:13:54.147 | 99.99th=[ 210] 00:13:54.147 bw ( KiB/s): min=26760, max=37728, per=25.67%, avg=32350.40, stdev=5443.59, samples=5 00:13:54.147 iops : min= 6690, max= 9432, avg=8087.60, stdev=1360.90, samples=5 00:13:54.147 lat (usec) : 100=44.73%, 250=55.26%, 500=0.01% 00:13:54.147 cpu : usr=3.44%, sys=11.54%, ctx=20981, majf=0, minf=2 00:13:54.147 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:54.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.147 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.147 issued rwts: total=20980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:54.147 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:54.147 00:13:54.147 Run status group 0 (all jobs): 00:13:54.147 READ: bw=123MiB/s (129MB/s), 31.0MiB/s-38.3MiB/s (32.5MB/s-40.1MB/s), io=419MiB (439MB), run=2644-3403msec 00:13:54.147 00:13:54.147 Disk stats (read/write): 00:13:54.148 nvme0n1: ios=28099/0, merge=0/0, ticks=2331/0, in_queue=2331, util=93.79% 00:13:54.148 nvme0n2: ios=29208/0, merge=0/0, ticks=2728/0, in_queue=2728, util=93.76% 00:13:54.148 nvme0n3: ios=26697/0, merge=0/0, ticks=2312/0, in_queue=2312, util=95.31% 00:13:54.148 nvme0n4: ios=20756/0, merge=0/0, ticks=2165/0, in_queue=2165, util=96.38% 00:13:54.148 07:03:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:54.148 07:03:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:54.713 07:03:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
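Note: the rpc.py deletions traced here (and continuing below) are the substance of the hotplug test — each backing bdev is pulled out from under the exported namespaces while the 10-second fio read job started at target/fio.sh@58 is still running, which is why every job above ends with err=121 (Remote I/O error). A minimal stand-alone sketch of the same pattern, assuming rpc.py is run from the SPDK repo root with its default socket and using the bdev names seen in this run (the job file here is illustrative, not the fio-wrapper's):

  # start a long-running read job against one of the exported namespaces
  fio --name=hotplug --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
      --rw=read --bs=4096 --iodepth=1 --time_based=1 --runtime=10 &
  fio_pid=$!

  # delete the RAID bdevs first, then the malloc bdevs behind them,
  # mirroring the fio.sh@63-66 traces in this log
  ./scripts/rpc.py bdev_raid_delete concat0
  ./scripts/rpc.py bdev_raid_delete raid0
  for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      ./scripts/rpc.py bdev_malloc_delete "$m"
  done

  # fio exiting non-zero here is the expected outcome of the hotplug test
  wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'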
00:13:54.713 07:03:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:54.971 07:03:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:54.971 07:03:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:55.536 07:03:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:55.536 07:03:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:55.793 07:03:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:55.793 07:03:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:56.051 07:03:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:56.051 07:03:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1562637 00:13:56.051 07:03:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:56.051 07:03:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:56.982 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.982 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:56.982 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1217 -- # local i=0 00:13:56.982 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:13:56.982 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:56.982 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:13:56.982 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:56.982 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # return 0 00:13:56.982 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:56.982 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:56.982 nvmf hotplug test: fio failed as expected 00:13:56.982 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:57.239 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:57.239 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:57.239 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:57.239 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:57.239 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:57.239 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:57.239 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:13:57.239 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:57.239 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:57.239 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:13:57.239 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:57.239 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:57.239 rmmod nvme_rdma 00:13:57.239 rmmod nvme_fabrics 00:13:57.239 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:57.239 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:13:57.239 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:13:57.239 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1559428 ']' 00:13:57.239 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1559428 00:13:57.239 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1559428 ']' 00:13:57.239 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1559428 00:13:57.239 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:13:57.239 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:57.239 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1559428 00:13:57.239 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:57.239 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:57.239 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1559428' 00:13:57.239 killing process with pid 1559428 00:13:57.239 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1559428 00:13:57.239 07:03:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1559428 00:13:59.170 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:59.170 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:59.170 00:13:59.170 real 0m31.796s 00:13:59.170 user 2m17.783s 00:13:59.170 sys 0m11.984s 00:13:59.170 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:59.170 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.170 ************************************ 00:13:59.170 END TEST nvmf_fio_target 00:13:59.170 ************************************ 00:13:59.170 07:03:13 nvmf_rdma.nvmf_target_core -- 
nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:13:59.170 07:03:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:59.170 07:03:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:59.170 07:03:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:59.427 ************************************ 00:13:59.427 START TEST nvmf_bdevio 00:13:59.427 ************************************ 00:13:59.427 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:13:59.427 * Looking for test storage... 00:13:59.427 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:59.427 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:59.427 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:59.427 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:59.427 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:59.427 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:59.427 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:59.427 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:59.427 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:59.427 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:59.427 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:59.427 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:59.427 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:59.427 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:59.427 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:13:59.427 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:59.427 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:59.427 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:59.427 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:59.427 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:59.427 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.427 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.427 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.427 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.428 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.428 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.428 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:59.428 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.428 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:13:59.428 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:59.428 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:59.428 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:59.428 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:59.428 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:59.428 
07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:59.428 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:59.428 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:59.428 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:59.428 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:59.428 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:13:59.428 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:59.428 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:59.428 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:59.428 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:59.428 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:59.428 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.428 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:59.428 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.428 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:59.428 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:59.428 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:13:59.428 07:03:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:07.537 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:07.537 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:14:07.537 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:07.537 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:07.537 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:07.537 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:07.537 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:07.537 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:14:07.537 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:07.537 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:14:07.537 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:14:07.537 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:14:07.537 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:14:07.537 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:14:07.537 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:14:07.537 
07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:07.537 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:07.537 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:07.537 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:07.537 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:07.537 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:07.537 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:07.537 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:14:07.538 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:14:07.538 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:14:07.538 Found net devices under 0000:d9:00.0: mlx_0_0 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:14:07.538 Found net devices under 0000:d9:00.1: mlx_0_1 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # rdma_device_init 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # uname 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:07.538 07:03:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:07.538 
07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:07.538 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:07.538 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:14:07.538 altname enp217s0f0np0 00:14:07.538 altname ens818f0np0 00:14:07.538 inet 192.168.100.8/24 scope global mlx_0_0 00:14:07.538 valid_lft forever preferred_lft forever 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:07.538 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:07.538 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:14:07.538 altname enp217s0f1np1 00:14:07.538 altname ens818f1np1 00:14:07.538 inet 192.168.100.9/24 scope global mlx_0_1 00:14:07.538 valid_lft forever preferred_lft forever 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:07.538 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:07.539 07:03:22 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:07.539 192.168.100.9' 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:07.539 192.168.100.9' 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # head -n 1 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:07.539 192.168.100.9' 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # tail -n +2 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # head -n 1 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:07.539 07:03:22 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:07.539 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:07.798 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:07.798 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:07.798 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:07.798 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:07.798 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1568788 00:14:07.798 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1568788 00:14:07.798 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:07.798 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1568788 ']' 00:14:07.798 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.798 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:07.798 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.798 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:07.798 07:03:22 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:07.798 [2024-07-24 07:03:22.285039] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:14:07.798 [2024-07-24 07:03:22.285146] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.798 EAL: No free 2048 kB hugepages reported on node 1 00:14:08.055 [2024-07-24 07:03:22.434991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:08.055 [2024-07-24 07:03:22.636574] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.055 [2024-07-24 07:03:22.636620] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:08.055 [2024-07-24 07:03:22.636640] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.055 [2024-07-24 07:03:22.636670] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.055 [2024-07-24 07:03:22.636681] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
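The nvmfappstart/waitforlisten step traced above amounts to launching nvmf_tgt in the background and polling its RPC socket before any further RPCs are issued. A minimal sketch of that pattern, run from the spdk checkout and assuming the default /var/tmp/spdk.sock socket (the real helpers in autotest_common.sh add retry limits and process-liveness checks):

# start the target with the same core mask used in this run, then wait for its RPC socket
build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
nvmfpid=$!
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5    # target not yet listening on the UNIX domain socket
done
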
00:14:08.055 [2024-07-24 07:03:22.636855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:08.055 [2024-07-24 07:03:22.636936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:08.055 [2024-07-24 07:03:22.637000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:08.055 [2024-07-24 07:03:22.637028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:08.620 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:08.620 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:14:08.620 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:08.620 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:08.620 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:08.620 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.620 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:08.620 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.620 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:08.620 [2024-07-24 07:03:23.139052] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f084e2ee940) succeed. 00:14:08.620 [2024-07-24 07:03:23.148203] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f084e2aa940) succeed. 
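The rpc_cmd wrapper traced above forwards its arguments to scripts/rpc.py against the running target, so the transport-creation step is equivalent to the direct invocation below; the two create_ib_device notices confirm that both mlx5 ports were picked up when the RDMA transport was created.

# create the RDMA transport with the options traced above (1024 shared buffers, -u 8192 io-unit size)
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
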
00:14:08.879 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.879 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:08.879 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.879 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:09.137 Malloc0 00:14:09.137 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.137 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:09.137 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.137 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:09.137 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.137 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:09.137 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.137 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:09.137 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.137 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:09.137 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.137 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:09.137 [2024-07-24 07:03:23.559061] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:09.137 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.137 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:09.137 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:09.137 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:14:09.137 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:14:09.137 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:09.137 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:09.137 { 00:14:09.137 "params": { 00:14:09.137 "name": "Nvme$subsystem", 00:14:09.137 "trtype": "$TEST_TRANSPORT", 00:14:09.137 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:09.137 "adrfam": "ipv4", 00:14:09.137 "trsvcid": "$NVMF_PORT", 00:14:09.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:09.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:09.137 "hdgst": ${hdgst:-false}, 00:14:09.137 "ddgst": ${ddgst:-false} 00:14:09.137 }, 00:14:09.137 "method": "bdev_nvme_attach_controller" 00:14:09.137 } 00:14:09.137 EOF 00:14:09.137 )") 00:14:09.137 07:03:23 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:14:09.137 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:14:09.137 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:14:09.137 07:03:23 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:09.137 "params": { 00:14:09.137 "name": "Nvme1", 00:14:09.137 "trtype": "rdma", 00:14:09.137 "traddr": "192.168.100.8", 00:14:09.137 "adrfam": "ipv4", 00:14:09.137 "trsvcid": "4420", 00:14:09.137 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:09.137 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:09.137 "hdgst": false, 00:14:09.137 "ddgst": false 00:14:09.137 }, 00:14:09.137 "method": "bdev_nvme_attach_controller" 00:14:09.137 }' 00:14:09.137 [2024-07-24 07:03:23.632736] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:14:09.137 [2024-07-24 07:03:23.632825] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1568969 ] 00:14:09.137 EAL: No free 2048 kB hugepages reported on node 1 00:14:09.394 [2024-07-24 07:03:23.783533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:09.394 [2024-07-24 07:03:24.015071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.394 [2024-07-24 07:03:24.015133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.394 [2024-07-24 07:03:24.015141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.958 I/O targets: 00:14:09.958 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:09.958 00:14:09.958 00:14:09.958 CUnit - A unit testing framework for C - Version 2.1-3 00:14:09.958 http://cunit.sourceforge.net/ 00:14:09.958 00:14:09.958 00:14:09.958 Suite: bdevio tests on: Nvme1n1 00:14:09.958 Test: blockdev write read block ...passed 00:14:09.958 Test: blockdev write zeroes read block ...passed 00:14:09.958 Test: blockdev write zeroes read no split ...passed 00:14:09.958 Test: blockdev write zeroes read split ...passed 00:14:09.958 Test: blockdev write zeroes read split partial ...passed 00:14:09.958 Test: blockdev reset ...[2024-07-24 07:03:24.570637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:10.216 [2024-07-24 07:03:24.607088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:14:10.216 [2024-07-24 07:03:24.639827] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
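bdevio receives its configuration on /dev/fd/62 from gen_nvmf_target_json, whose resolved attach-controller parameters are printed just above. A sketch of the document it is handed, assuming the standard SPDK subsystems/bdev/config wrapper around that single entry (the helper may emit additional bdev config entries not shown here):

# hedged reconstruction of the JSON fed to bdevio --json /dev/fd/62 for this run
cat <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "rdma", "traddr": "192.168.100.8",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
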
00:14:10.216 passed 00:14:10.216 Test: blockdev write read 8 blocks ...passed 00:14:10.216 Test: blockdev write read size > 128k ...passed 00:14:10.216 Test: blockdev write read invalid size ...passed 00:14:10.216 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:10.216 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:10.216 Test: blockdev write read max offset ...passed 00:14:10.216 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:10.216 Test: blockdev writev readv 8 blocks ...passed 00:14:10.216 Test: blockdev writev readv 30 x 1block ...passed 00:14:10.216 Test: blockdev writev readv block ...passed 00:14:10.216 Test: blockdev writev readv size > 128k ...passed 00:14:10.216 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:10.216 Test: blockdev comparev and writev ...[2024-07-24 07:03:24.645317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:10.216 [2024-07-24 07:03:24.645358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:10.216 [2024-07-24 07:03:24.645377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:10.216 [2024-07-24 07:03:24.645396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:10.216 [2024-07-24 07:03:24.645603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:10.216 [2024-07-24 07:03:24.645623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:10.216 [2024-07-24 07:03:24.645644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:10.216 [2024-07-24 07:03:24.645659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:10.216 [2024-07-24 07:03:24.645835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:10.216 [2024-07-24 07:03:24.645854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:10.216 [2024-07-24 07:03:24.645868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:10.216 [2024-07-24 07:03:24.645883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:10.216 [2024-07-24 07:03:24.646061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:10.216 [2024-07-24 07:03:24.646082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:10.216 [2024-07-24 07:03:24.646096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:10.216 [2024-07-24 07:03:24.646113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:10.216 passed 00:14:10.216 Test: blockdev nvme passthru rw ...passed 00:14:10.216 Test: blockdev nvme passthru vendor specific ...[2024-07-24 07:03:24.646443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:14:10.216 [2024-07-24 07:03:24.646470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:10.216 [2024-07-24 07:03:24.646530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:14:10.216 [2024-07-24 07:03:24.646546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:10.216 [2024-07-24 07:03:24.646597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:14:10.216 [2024-07-24 07:03:24.646614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:10.216 [2024-07-24 07:03:24.646868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:14:10.216 [2024-07-24 07:03:24.646887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:10.216 passed 00:14:10.216 Test: blockdev nvme admin passthru ...passed 00:14:10.216 Test: blockdev copy ...passed 00:14:10.216 00:14:10.216 Run Summary: Type Total Ran Passed Failed Inactive 00:14:10.216 suites 1 1 n/a 0 0 00:14:10.216 tests 23 23 23 0 0 00:14:10.216 asserts 152 152 152 0 n/a 00:14:10.216 00:14:10.216 Elapsed time = 0.404 seconds 00:14:11.148 07:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:11.148 07:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.148 07:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:11.148 07:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.148 07:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:11.148 07:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:11.148 07:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:11.148 07:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:14:11.148 07:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:11.149 07:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:11.149 07:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:14:11.149 07:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:11.149 07:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:11.149 rmmod nvme_rdma 00:14:11.149 rmmod nvme_fabrics 00:14:11.149 07:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:11.149 07:03:25 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:14:11.149 07:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:14:11.149 07:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1568788 ']' 00:14:11.149 07:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1568788 00:14:11.149 07:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 1568788 ']' 00:14:11.149 07:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1568788 00:14:11.149 07:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:14:11.406 07:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:11.406 07:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1568788 00:14:11.406 07:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:14:11.406 07:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:14:11.406 07:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1568788' 00:14:11.406 killing process with pid 1568788 00:14:11.406 07:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 1568788 00:14:11.406 07:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1568788 00:14:13.303 07:03:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:13.303 07:03:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:13.303 00:14:13.303 real 0m14.048s 00:14:13.303 user 0m25.125s 00:14:13.303 sys 0m7.205s 00:14:13.303 07:03:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:13.303 07:03:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:13.303 ************************************ 00:14:13.303 END TEST nvmf_bdevio 00:14:13.303 ************************************ 00:14:13.303 07:03:27 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:13.303 00:14:13.303 real 5m1.078s 00:14:13.303 user 12m29.453s 00:14:13.303 sys 1m57.561s 00:14:13.303 07:03:27 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:13.303 07:03:27 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:13.303 ************************************ 00:14:13.303 END TEST nvmf_target_core 00:14:13.303 ************************************ 00:14:13.562 07:03:27 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:14:13.562 07:03:27 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:13.562 07:03:27 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:13.562 07:03:27 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:13.562 ************************************ 00:14:13.562 START TEST nvmf_target_extra 00:14:13.562 ************************************ 00:14:13.562 07:03:27 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:14:13.562 * Looking for test storage... 00:14:13.562 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:13.562 ************************************ 00:14:13.562 START TEST nvmf_example 00:14:13.562 ************************************ 00:14:13.562 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:14:13.821 * Looking for test storage... 00:14:13.821 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:13.821 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:13.821 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:14:13.821 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:13.821 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:13.821 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:13.821 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:13.821 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:13.821 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:13.821 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:13.821 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:13.821 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:13.821 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:13.821 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:13.821 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:13.821 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:13.821 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:13.821 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:13.821 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:13.821 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:13.821 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.821 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.821 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:13.822 07:03:28 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:14:13.822 07:03:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@293 -- # pci_drivers=() 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:14:23.797 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:14:23.797 07:03:36 
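The probe above keys a PCI cache by vendor:device ID (0x15b3:0x1015 is a Mellanox ConnectX-4 Lx function) and then resolves each matching function to its kernel net device through the /sys/bus/pci/devices/<bdf>/net/ glob that appears in the following trace lines. A minimal standalone sketch of the same lookup, assuming lspci is installed and reusing the PCI address reported above:

# list Mellanox (vendor 0x15b3) functions with full domain:bus:dev.fn addresses
lspci -D -d 15b3:
# map one function to its net device the same way nvmf/common.sh does
ls /sys/bus/pci/devices/0000:d9:00.0/net/    # -> mlx_0_0 on this rig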
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:23.797 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:14:23.798 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:14:23.798 Found net devices under 0000:d9:00.0: mlx_0_0 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:d9:00.1: mlx_0_1' 00:14:23.798 Found net devices under 0000:d9:00.1: mlx_0_1 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # rdma_device_init 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # uname 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 
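Before any addresses are assigned, load_ib_rdma_modules pulls in the kernel RDMA/IB stack. The module set below is taken verbatim from the trace above; a manual reproduction of the same step:

# RDMA/IB stack prerequisites loaded by load_ib_rdma_modules, in trace order
for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$m"
done
# the NVMe-oF host driver (modprobe nvme-rdma) is loaded later in this run,
# once the transport options have been fixed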
00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:23.798 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:23.798 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:14:23.798 altname enp217s0f0np0 00:14:23.798 altname ens818f0np0 00:14:23.798 inet 192.168.100.8/24 scope global mlx_0_0 00:14:23.798 valid_lft forever preferred_lft forever 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:23.798 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:23.798 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:14:23.798 altname enp217s0f1np1 00:14:23.798 altname ens818f1np1 00:14:23.798 inet 192.168.100.9/24 scope global mlx_0_1 00:14:23.798 valid_lft forever preferred_lft forever 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 
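The per-interface address check above reduces to a single pipeline; reading mlx_0_0's IPv4 address the same way get_ip_address in nvmf/common.sh does:

# equivalent of: get_ip_address mlx_0_0
ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8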
00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:14:23.798 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example 
-- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:23.799 192.168.100.9' 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:23.799 192.168.100.9' 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@457 -- # head -n 1 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:23.799 192.168.100.9' 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@458 -- # tail -n +2 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@458 -- # head -n 1 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1573961 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1573961 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 1573961 ']' 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:14:23.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:23.799 07:03:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:23.799 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.799 07:03:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:23.799 07:03:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:14:23.799 07:03:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:14:23.799 07:03:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:23.799 07:03:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:23.799 07:03:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:23.799 07:03:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.799 07:03:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:23.799 07:03:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.799 07:03:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:14:23.799 07:03:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.799 07:03:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:23.799 07:03:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.799 07:03:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:14:23.799 07:03:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:23.799 07:03:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.799 07:03:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:23.799 07:03:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.799 07:03:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:14:23.799 07:03:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:23.799 07:03:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.799 07:03:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:23.799 07:03:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.799 07:03:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:23.799 07:03:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 
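The target is configured entirely over JSON-RPC; rpc_cmd is effectively the test harness's wrapper around SPDK's scripts/rpc.py channel. Assuming a running target listening on the default /var/tmp/spdk.sock, the same setup could be issued directly (commands and arguments mirror the trace above):

./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
./scripts/rpc.py bdev_malloc_create 64 512          # 64 MiB bdev, 512 B blocks -> Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420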
00:14:23.799 07:03:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:23.799 07:03:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.799 07:03:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:14:23.799 07:03:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:23.799 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.066 Initializing NVMe Controllers 00:14:36.066 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:14:36.066 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:36.066 Initialization complete. Launching workers. 00:14:36.066 ======================================================== 00:14:36.066 Latency(us) 00:14:36.066 Device Information : IOPS MiB/s Average min max 00:14:36.066 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 22900.91 89.46 2794.56 734.69 15961.98 00:14:36.066 ======================================================== 00:14:36.066 Total : 22900.91 89.46 2794.56 734.69 15961.98 00:14:36.066 00:14:36.066 07:03:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:14:36.066 07:03:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:14:36.066 07:03:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:36.066 07:03:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:14:36.066 07:03:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:36.066 07:03:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:36.066 07:03:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:14:36.066 07:03:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:36.066 07:03:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:36.066 rmmod nvme_rdma 00:14:36.066 rmmod nvme_fabrics 00:14:36.066 07:03:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:36.066 07:03:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:14:36.066 07:03:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:14:36.066 07:03:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1573961 ']' 00:14:36.066 07:03:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1573961 00:14:36.066 07:03:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 1573961 ']' 00:14:36.066 07:03:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 1573961 00:14:36.066 07:03:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:14:36.066 07:03:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:36.066 07:03:49 
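The summary table above is internally consistent and worth a quick sanity check: with queue depth 64 and 4096-byte I/O, throughput and average latency follow directly from the reported IOPS (latency columns are in microseconds).

22900.91 IOPS x 4096 B  ≈ 93.8 MB/s ≈ 89.46 MiB/s        (matches the MiB/s column)
64 (queue depth) / 22900.91 IOPS ≈ 2.79 ms ≈ 2794 us      (matches the Average latency)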
nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1573961 00:14:36.066 07:03:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:14:36.066 07:03:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:14:36.066 07:03:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1573961' 00:14:36.066 killing process with pid 1573961 00:14:36.066 07:03:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@967 -- # kill 1573961 00:14:36.066 07:03:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # wait 1573961 00:14:37.002 nvmf threads initialize successfully 00:14:37.002 bdev subsystem init successfully 00:14:37.002 created a nvmf target service 00:14:37.002 create targets's poll groups done 00:14:37.002 all subsystems of target started 00:14:37.002 nvmf target is running 00:14:37.002 all subsystems of target stopped 00:14:37.002 destroy targets's poll groups done 00:14:37.002 destroyed the nvmf target service 00:14:37.002 bdev subsystem finish successfully 00:14:37.002 nvmf threads destroy successfully 00:14:37.002 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:37.002 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:37.002 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:14:37.002 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:37.002 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:37.002 00:14:37.002 real 0m23.444s 00:14:37.002 user 0m58.502s 00:14:37.002 sys 0m7.278s 00:14:37.002 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:37.002 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:37.002 ************************************ 00:14:37.002 END TEST nvmf_example 00:14:37.002 ************************************ 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:37.263 ************************************ 00:14:37.263 START TEST nvmf_filesystem 00:14:37.263 ************************************ 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:14:37.263 * Looking for test storage... 
00:14:37.263 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 
00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:14:37.263 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 
00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # 
CONFIG_MAX_LCORES=128 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:14:37.264 #define SPDK_CONFIG_H 00:14:37.264 #define SPDK_CONFIG_APPS 1 00:14:37.264 #define SPDK_CONFIG_ARCH native 00:14:37.264 #define SPDK_CONFIG_ASAN 1 00:14:37.264 #undef SPDK_CONFIG_AVAHI 00:14:37.264 #undef SPDK_CONFIG_CET 00:14:37.264 #define SPDK_CONFIG_COVERAGE 1 00:14:37.264 #define SPDK_CONFIG_CROSS_PREFIX 00:14:37.264 #undef 
SPDK_CONFIG_CRYPTO 00:14:37.264 #undef SPDK_CONFIG_CRYPTO_MLX5 00:14:37.264 #undef SPDK_CONFIG_CUSTOMOCF 00:14:37.264 #undef SPDK_CONFIG_DAOS 00:14:37.264 #define SPDK_CONFIG_DAOS_DIR 00:14:37.264 #define SPDK_CONFIG_DEBUG 1 00:14:37.264 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:14:37.264 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:14:37.264 #define SPDK_CONFIG_DPDK_INC_DIR 00:14:37.264 #define SPDK_CONFIG_DPDK_LIB_DIR 00:14:37.264 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:14:37.264 #undef SPDK_CONFIG_DPDK_UADK 00:14:37.264 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:14:37.264 #define SPDK_CONFIG_EXAMPLES 1 00:14:37.264 #undef SPDK_CONFIG_FC 00:14:37.264 #define SPDK_CONFIG_FC_PATH 00:14:37.264 #define SPDK_CONFIG_FIO_PLUGIN 1 00:14:37.264 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:14:37.264 #undef SPDK_CONFIG_FUSE 00:14:37.264 #undef SPDK_CONFIG_FUZZER 00:14:37.264 #define SPDK_CONFIG_FUZZER_LIB 00:14:37.264 #undef SPDK_CONFIG_GOLANG 00:14:37.264 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:14:37.264 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:14:37.264 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:14:37.264 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:14:37.264 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:14:37.264 #undef SPDK_CONFIG_HAVE_LIBBSD 00:14:37.264 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:14:37.264 #define SPDK_CONFIG_IDXD 1 00:14:37.264 #define SPDK_CONFIG_IDXD_KERNEL 1 00:14:37.264 #undef SPDK_CONFIG_IPSEC_MB 00:14:37.264 #define SPDK_CONFIG_IPSEC_MB_DIR 00:14:37.264 #define SPDK_CONFIG_ISAL 1 00:14:37.264 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:14:37.264 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:14:37.264 #define SPDK_CONFIG_LIBDIR 00:14:37.264 #undef SPDK_CONFIG_LTO 00:14:37.264 #define SPDK_CONFIG_MAX_LCORES 128 00:14:37.264 #define SPDK_CONFIG_NVME_CUSE 1 00:14:37.264 #undef SPDK_CONFIG_OCF 00:14:37.264 #define SPDK_CONFIG_OCF_PATH 00:14:37.264 #define SPDK_CONFIG_OPENSSL_PATH 00:14:37.264 #undef SPDK_CONFIG_PGO_CAPTURE 00:14:37.264 #define SPDK_CONFIG_PGO_DIR 00:14:37.264 #undef SPDK_CONFIG_PGO_USE 00:14:37.264 #define SPDK_CONFIG_PREFIX /usr/local 00:14:37.264 #undef SPDK_CONFIG_RAID5F 00:14:37.264 #undef SPDK_CONFIG_RBD 00:14:37.264 #define SPDK_CONFIG_RDMA 1 00:14:37.264 #define SPDK_CONFIG_RDMA_PROV verbs 00:14:37.264 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:14:37.264 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:14:37.264 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:14:37.264 #define SPDK_CONFIG_SHARED 1 00:14:37.264 #undef SPDK_CONFIG_SMA 00:14:37.264 #define SPDK_CONFIG_TESTS 1 00:14:37.264 #undef SPDK_CONFIG_TSAN 00:14:37.264 #define SPDK_CONFIG_UBLK 1 00:14:37.264 #define SPDK_CONFIG_UBSAN 1 00:14:37.264 #undef SPDK_CONFIG_UNIT_TESTS 00:14:37.264 #undef SPDK_CONFIG_URING 00:14:37.264 #define SPDK_CONFIG_URING_PATH 00:14:37.264 #undef SPDK_CONFIG_URING_ZNS 00:14:37.264 #undef SPDK_CONFIG_USDT 00:14:37.264 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:14:37.264 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:14:37.264 #undef SPDK_CONFIG_VFIO_USER 00:14:37.264 #define SPDK_CONFIG_VFIO_USER_DIR 00:14:37.264 #define SPDK_CONFIG_VHOST 1 00:14:37.264 #define SPDK_CONFIG_VIRTIO 1 00:14:37.264 #undef SPDK_CONFIG_VTUNE 00:14:37.264 #define SPDK_CONFIG_VTUNE_DIR 00:14:37.264 #define SPDK_CONFIG_WERROR 1 00:14:37.264 #define SPDK_CONFIG_WPDK_DIR 00:14:37.264 #undef SPDK_CONFIG_XNVME 00:14:37.264 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:14:37.264 07:03:51 
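applications.sh decides whether debug-only application behaviour is available by pattern-matching the generated include/spdk/config.h (dumped above) for SPDK_CONFIG_DEBUG. An equivalent standalone check, assuming the build tree layout used in this job:

# same idea as the [[ $(< config.h) == *'#define SPDK_CONFIG_DEBUG'* ]] test above
grep -q '#define SPDK_CONFIG_DEBUG 1' \
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h && echo "debug build"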
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.264 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.265 07:03:51 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:14:37.265 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 1 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
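
The long run of ": N" / "export SPDK_TEST_*" pairs above is the default-then-export idiom from autotest_common.sh: the ":" builtin forces the parameter expansion, which assigns a default only when the flag is unset or empty, so values injected by autorun-spdk.conf survive. An abbreviated, illustrative sketch (flag values taken from this run; the exact defaults and the full variable list are longer in the real script):

  # RUN_NIGHTLY arrived as 1 from the job config, so the default here is a no-op.
  : "${RUN_NIGHTLY:=0}"
  export RUN_NIGHTLY
  # Flags the job did not set fall back to 0 (disabled).
  : "${SPDK_TEST_RBD:=0}"
  export SPDK_TEST_RBD
  # String-valued flags keep whatever the job set; this run uses mlx5.
  : "${SPDK_TEST_NVMF_NICS:=}"
  export SPDK_TEST_NVMF_NICS
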
VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:37.266 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # export 
SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:14:37.267 07:03:51 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=rdma 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1576476 ]] 00:14:37.267 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1576476 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.fW1mUn 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.fW1mUn/tests/target /tmp/spdk.fW1mUn 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@363 -- # uses["$mount"]=0 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=951066624 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4333363200 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=50743918592 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742276608 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10998358016 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30856507392 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871138304 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=14630912 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12325031936 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348456960 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=23425024 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:14:37.527 07:03:51 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30867001344 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871138304 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4136960 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6174220288 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174224384 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:14:37.527 * Looking for test storage... 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:14:37.527 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=50743918592 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=13212950528 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:37.528 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
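
set_test_storage, traced above, decides where test scratch data goes: it resolves the mount backing the test directory with df, compares the available bytes against the ~2 GiB request, and exports SPDK_TEST_STORAGE when the mount is big enough (the real helper also walks fallback candidates under the mktemp directory). A simplified sketch of that decision, assuming GNU df:

  requested=2214592512                                    # bytes requested in this run
  testdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
  # Mount point backing the test dir, same awk filter as in the trace.
  mount=$(df "$testdir" | awk '$1 !~ /Filesystem/ {print $6}')
  avail=$(df --output=avail --block-size=1 "$mount" | tail -n1)
  if (( avail >= requested )); then
      export SPDK_TEST_STORAGE=$testdir
      printf '* Found test storage at %s\n' "$testdir"
  fi
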
0 : 0 - 1]' 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:37.528 07:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:45.633 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:45.633 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:45.633 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:45.633 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:45.633 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:45.633 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:45.633 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:45.633 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:45.633 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:45.633 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:14:45.633 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:45.633 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- 
# x722=() 00:14:45.633 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:45.633 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:14:45.633 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:14:45.634 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:45.634 07:03:59 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:14:45.634 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:14:45.634 Found net devices under 0000:d9:00.0: mlx_0_0 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:14:45.634 Found net devices under 0000:d9:00.1: mlx_0_1 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == 
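
The device scan above is nvmf/common.sh assembling its list of RDMA-capable NICs: it gathers the Mellanox PCI IDs it knows about, then reports each matching function and the netdev exposed under its sysfs node. A rough equivalent of that walk, restricted to the vendor/device pair seen in this run (0x15b3 / 0x1015); this is an illustrative simplification, not the exact helper:

  # Walk PCI devices and report netdevs for the Mellanox ID observed in the trace.
  for pci in /sys/bus/pci/devices/*; do
      [[ $(cat "$pci/vendor") == 0x15b3 && $(cat "$pci/device") == 0x1015 ]] || continue
      echo "Found ${pci##*/} ($(cat "$pci/vendor") - $(cat "$pci/device"))"
      for net in "$pci"/net/*; do
          [[ -e $net ]] && echo "Found net device under ${pci##*/}: ${net##*/}"
      done
  done
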
yes ]] 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # rdma_device_init 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # uname 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:45.634 07:03:59 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:14:45.634 07:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:45.634 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:45.634 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:45.634 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:45.634 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:45.634 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:45.634 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:45.634 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:45.634 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:45.634 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:45.634 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:14:45.634 altname enp217s0f0np0 00:14:45.634 altname ens818f0np0 00:14:45.634 inet 192.168.100.8/24 scope global mlx_0_0 00:14:45.634 valid_lft forever preferred_lft forever 00:14:45.634 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:45.634 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:45.634 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:45.634 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:45.634 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:45.634 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:45.634 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:45.635 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:45.635 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:14:45.635 altname enp217s0f1np1 00:14:45.635 altname ens818f1np1 00:14:45.635 inet 192.168.100.9/24 scope global mlx_0_1 00:14:45.635 valid_lft forever preferred_lft forever 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@456 -- # 
get_available_rdma_ips 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:45.635 192.168.100.9' 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:45.635 192.168.100.9' 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@457 -- # head -n 1 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:45.635 192.168.100.9' 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@458 -- # tail -n +2 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@458 -- # head -n 1 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:45.635 ************************************ 00:14:45.635 START TEST nvmf_filesystem_no_in_capsule 00:14:45.635 ************************************ 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1580568 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1580568 00:14:45.635 07:04:00 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1580568 ']' 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:45.635 07:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:45.894 [2024-07-24 07:04:00.275993] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:14:45.894 [2024-07-24 07:04:00.276089] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.894 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.894 [2024-07-24 07:04:00.422257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:46.152 [2024-07-24 07:04:00.627602] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.152 [2024-07-24 07:04:00.627651] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:46.152 [2024-07-24 07:04:00.627666] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.152 [2024-07-24 07:04:00.627676] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.152 [2024-07-24 07:04:00.627688] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:46.152 [2024-07-24 07:04:00.627813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.152 [2024-07-24 07:04:00.627921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:46.152 [2024-07-24 07:04:00.627982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.152 [2024-07-24 07:04:00.627994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:46.718 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:46.718 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:14:46.718 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:46.718 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:46.718 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:46.718 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.718 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:46.718 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:14:46.718 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.718 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:46.718 [2024-07-24 07:04:01.099808] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:14:46.718 [2024-07-24 07:04:01.125167] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7fa7e99b0940) succeed. 00:14:46.718 [2024-07-24 07:04:01.134982] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7fa7e996a940) succeed. 
00:14:46.718 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.718 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:46.718 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.718 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:47.650 Malloc1 00:14:47.650 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.650 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:47.650 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.651 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:47.651 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.651 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:47.651 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.651 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:47.651 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.651 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:47.651 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.651 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:47.651 [2024-07-24 07:04:01.980918] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:47.651 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.651 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:47.651 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_name=Malloc1 00:14:47.651 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_info 00:14:47.651 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bs 00:14:47.651 07:04:01 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local nb 00:14:47.651 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:47.651 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.651 07:04:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:47.651 07:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.651 07:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:14:47.651 { 00:14:47.651 "name": "Malloc1", 00:14:47.651 "aliases": [ 00:14:47.651 "68e9e8d8-f9be-45a8-b517-aafa1acaec6f" 00:14:47.651 ], 00:14:47.651 "product_name": "Malloc disk", 00:14:47.651 "block_size": 512, 00:14:47.651 "num_blocks": 1048576, 00:14:47.651 "uuid": "68e9e8d8-f9be-45a8-b517-aafa1acaec6f", 00:14:47.651 "assigned_rate_limits": { 00:14:47.651 "rw_ios_per_sec": 0, 00:14:47.651 "rw_mbytes_per_sec": 0, 00:14:47.651 "r_mbytes_per_sec": 0, 00:14:47.651 "w_mbytes_per_sec": 0 00:14:47.651 }, 00:14:47.651 "claimed": true, 00:14:47.651 "claim_type": "exclusive_write", 00:14:47.651 "zoned": false, 00:14:47.651 "supported_io_types": { 00:14:47.651 "read": true, 00:14:47.651 "write": true, 00:14:47.651 "unmap": true, 00:14:47.651 "flush": true, 00:14:47.651 "reset": true, 00:14:47.651 "nvme_admin": false, 00:14:47.651 "nvme_io": false, 00:14:47.651 "nvme_io_md": false, 00:14:47.651 "write_zeroes": true, 00:14:47.651 "zcopy": true, 00:14:47.651 "get_zone_info": false, 00:14:47.651 "zone_management": false, 00:14:47.651 "zone_append": false, 00:14:47.651 "compare": false, 00:14:47.651 "compare_and_write": false, 00:14:47.651 "abort": true, 00:14:47.651 "seek_hole": false, 00:14:47.651 "seek_data": false, 00:14:47.651 "copy": true, 00:14:47.651 "nvme_iov_md": false 00:14:47.651 }, 00:14:47.651 "memory_domains": [ 00:14:47.651 { 00:14:47.651 "dma_device_id": "system", 00:14:47.651 "dma_device_type": 1 00:14:47.651 }, 00:14:47.651 { 00:14:47.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.651 "dma_device_type": 2 00:14:47.651 } 00:14:47.651 ], 00:14:47.651 "driver_specific": {} 00:14:47.651 } 00:14:47.651 ]' 00:14:47.651 07:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:14:47.651 07:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bs=512 00:14:47.651 07:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:14:47.651 07:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # nb=1048576 00:14:47.651 07:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bdev_size=512 00:14:47.651 07:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # echo 512 00:14:47.651 07:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # 
malloc_size=536870912 00:14:47.651 07:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:48.584 07:04:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:14:48.584 07:04:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # local i=0 00:14:48.584 07:04:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:14:48.584 07:04:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:14:48.584 07:04:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # sleep 2 00:14:50.479 07:04:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:14:50.479 07:04:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:14:50.479 07:04:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:14:50.479 07:04:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:14:50.479 07:04:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:14:50.479 07:04:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # return 0 00:14:50.479 07:04:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:50.479 07:04:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:50.736 07:04:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:50.737 07:04:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:50.737 07:04:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:50.737 07:04:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:50.737 07:04:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:50.737 07:04:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:50.737 07:04:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:50.737 07:04:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 
-- # (( nvme_size == malloc_size )) 00:14:50.737 07:04:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:50.737 07:04:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:50.994 07:04:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:51.928 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:14:51.928 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:51.928 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:51.928 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:51.928 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:51.928 ************************************ 00:14:51.928 START TEST filesystem_ext4 00:14:51.928 ************************************ 00:14:51.928 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:14:51.928 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:51.928 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:51.928 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:51.928 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:14:51.928 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:14:51.928 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:14:51.928 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:14:51.928 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:14:51.928 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:14:51.928 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:51.928 mke2fs 1.46.5 (30-Dec-2021) 00:14:51.928 Discarding device blocks: 0/522240 done 00:14:51.928 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:51.928 Filesystem UUID: 65385266-2ac0-400f-98bc-086b3d277eaf 00:14:51.928 Superblock backups stored on 
blocks: 00:14:51.928 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:51.928 00:14:51.928 Allocating group tables: 0/64 done 00:14:51.928 Writing inode tables: 0/64 done 00:14:51.928 Creating journal (8192 blocks): done 00:14:51.928 Writing superblocks and filesystem accounting information: 0/64 done 00:14:51.928 00:14:51.928 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:14:51.928 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:52.187 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:52.187 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:14:52.187 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:52.187 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:14:52.187 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:52.187 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:52.187 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1580568 00:14:52.187 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:52.187 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:52.187 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:52.187 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:52.187 00:14:52.187 real 0m0.199s 00:14:52.187 user 0m0.026s 00:14:52.187 sys 0m0.081s 00:14:52.187 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:52.187 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:52.187 ************************************ 00:14:52.187 END TEST filesystem_ext4 00:14:52.187 ************************************ 00:14:52.187 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:52.187 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:52.187 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:52.187 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:14:52.187 ************************************ 00:14:52.187 START TEST filesystem_btrfs 00:14:52.187 ************************************ 00:14:52.187 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:52.187 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:52.187 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:52.187 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:52.187 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:14:52.187 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:14:52.187 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:14:52.187 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:14:52.187 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:14:52.187 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:14:52.187 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:52.445 btrfs-progs v6.6.2 00:14:52.446 See https://btrfs.readthedocs.io for more information. 00:14:52.446 00:14:52.446 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:14:52.446 NOTE: several default settings have changed in version 5.15, please make sure 00:14:52.446 this does not affect your deployments: 00:14:52.446 - DUP for metadata (-m dup) 00:14:52.446 - enabled no-holes (-O no-holes) 00:14:52.446 - enabled free-space-tree (-R free-space-tree) 00:14:52.446 00:14:52.446 Label: (null) 00:14:52.446 UUID: d1f93b4e-c1a8-43f1-a9ba-dfeb33d5696e 00:14:52.446 Node size: 16384 00:14:52.446 Sector size: 4096 00:14:52.446 Filesystem size: 510.00MiB 00:14:52.446 Block group profiles: 00:14:52.446 Data: single 8.00MiB 00:14:52.446 Metadata: DUP 32.00MiB 00:14:52.446 System: DUP 8.00MiB 00:14:52.446 SSD detected: yes 00:14:52.446 Zoned device: no 00:14:52.446 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:14:52.446 Runtime features: free-space-tree 00:14:52.446 Checksum: crc32c 00:14:52.446 Number of devices: 1 00:14:52.446 Devices: 00:14:52.446 ID SIZE PATH 00:14:52.446 1 510.00MiB /dev/nvme0n1p1 00:14:52.446 00:14:52.446 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:14:52.446 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:52.446 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:52.446 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:14:52.446 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:52.446 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:14:52.446 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:52.446 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:52.446 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1580568 00:14:52.446 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:52.446 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:52.446 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:52.446 07:04:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:52.446 00:14:52.446 real 0m0.268s 00:14:52.446 user 0m0.038s 00:14:52.446 sys 0m0.133s 00:14:52.446 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:52.446 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:14:52.446 ************************************ 
00:14:52.446 END TEST filesystem_btrfs 00:14:52.446 ************************************ 00:14:52.446 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:14:52.446 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:52.446 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:52.446 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:52.704 ************************************ 00:14:52.704 START TEST filesystem_xfs 00:14:52.704 ************************************ 00:14:52.704 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:14:52.704 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:52.704 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:52.704 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:52.704 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:14:52.704 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:14:52.704 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:14:52.704 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:14:52.704 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:14:52.704 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:14:52.705 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:52.705 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:52.705 = sectsz=512 attr=2, projid32bit=1 00:14:52.705 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:52.705 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:52.705 data = bsize=4096 blocks=130560, imaxpct=25 00:14:52.705 = sunit=0 swidth=0 blks 00:14:52.705 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:52.705 log =internal log bsize=4096 blocks=16384, version=2 00:14:52.705 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:52.705 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:52.705 Discarding blocks...Done. 
00:14:52.705 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:14:52.705 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:52.705 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:52.705 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:14:52.705 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:52.705 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:14:52.705 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:14:52.705 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:52.705 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1580568 00:14:52.705 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:52.705 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:52.705 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:52.705 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:52.705 00:14:52.705 real 0m0.225s 00:14:52.705 user 0m0.032s 00:14:52.705 sys 0m0.080s 00:14:52.705 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:52.705 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:52.705 ************************************ 00:14:52.705 END TEST filesystem_xfs 00:14:52.705 ************************************ 00:14:52.963 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:52.963 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:52.963 07:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:53.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.896 07:04:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:53.896 07:04:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # local i=0 00:14:53.896 07:04:08 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:14:53.896 07:04:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:53.896 07:04:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:14:53.896 07:04:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:53.896 07:04:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # return 0 00:14:53.896 07:04:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:53.896 07:04:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.896 07:04:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:53.896 07:04:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.896 07:04:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:53.896 07:04:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1580568 00:14:53.896 07:04:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1580568 ']' 00:14:53.896 07:04:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1580568 00:14:53.896 07:04:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:14:53.897 07:04:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:53.897 07:04:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1580568 00:14:53.897 07:04:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:53.897 07:04:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:53.897 07:04:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1580568' 00:14:53.897 killing process with pid 1580568 00:14:53.897 07:04:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 1580568 00:14:53.897 07:04:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 1580568 00:14:57.235 07:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:57.235 00:14:57.235 real 0m11.213s 00:14:57.235 user 0m41.246s 00:14:57.235 sys 0m1.476s 00:14:57.235 07:04:11 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:57.235 07:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:57.235 ************************************ 00:14:57.235 END TEST nvmf_filesystem_no_in_capsule 00:14:57.235 ************************************ 00:14:57.235 07:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:14:57.235 07:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:57.235 07:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:57.235 07:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:57.235 ************************************ 00:14:57.235 START TEST nvmf_filesystem_in_capsule 00:14:57.235 ************************************ 00:14:57.235 07:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:14:57.235 07:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:14:57.235 07:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:57.235 07:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:57.235 07:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:57.235 07:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:57.235 07:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1582666 00:14:57.235 07:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:57.236 07:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1582666 00:14:57.236 07:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1582666 ']' 00:14:57.236 07:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.236 07:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:57.236 07:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:57.236 07:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:57.236 07:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:57.236 [2024-07-24 07:04:11.534961] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:14:57.236 [2024-07-24 07:04:11.535050] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.236 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.236 [2024-07-24 07:04:11.681024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:57.494 [2024-07-24 07:04:11.897525] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.494 [2024-07-24 07:04:11.897570] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.494 [2024-07-24 07:04:11.897584] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.494 [2024-07-24 07:04:11.897595] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.494 [2024-07-24 07:04:11.897606] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.494 [2024-07-24 07:04:11.897723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.494 [2024-07-24 07:04:11.897837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.494 [2024-07-24 07:04:11.897898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.494 [2024-07-24 07:04:11.897910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.752 07:04:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:57.752 07:04:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:14:57.752 07:04:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:57.752 07:04:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:57.752 07:04:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:57.752 07:04:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.752 07:04:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:57.752 07:04:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:14:57.752 07:04:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.752 07:04:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:58.010 [2024-07-24 07:04:12.398935] 
rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7fb3bcdb4940) succeed. 00:14:58.010 [2024-07-24 07:04:12.408603] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7fb3bcd70940) succeed. 00:14:58.268 07:04:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.268 07:04:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:58.268 07:04:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.268 07:04:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:58.834 Malloc1 00:14:58.834 07:04:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.834 07:04:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:58.834 07:04:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.834 07:04:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:58.834 07:04:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.834 07:04:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:58.834 07:04:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.834 07:04:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:58.834 07:04:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.834 07:04:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:58.834 07:04:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.834 07:04:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:58.834 [2024-07-24 07:04:13.411581] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:58.834 07:04:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.834 07:04:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:58.834 07:04:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_name=Malloc1 00:14:58.834 07:04:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_info 00:14:58.834 07:04:13 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bs 00:14:58.834 07:04:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local nb 00:14:58.834 07:04:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:58.834 07:04:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.834 07:04:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:58.834 07:04:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.834 07:04:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:14:58.834 { 00:14:58.834 "name": "Malloc1", 00:14:58.834 "aliases": [ 00:14:58.834 "9aee7ca3-e436-4195-90fb-728571e8c0e8" 00:14:58.834 ], 00:14:58.834 "product_name": "Malloc disk", 00:14:58.834 "block_size": 512, 00:14:58.834 "num_blocks": 1048576, 00:14:58.834 "uuid": "9aee7ca3-e436-4195-90fb-728571e8c0e8", 00:14:58.834 "assigned_rate_limits": { 00:14:58.834 "rw_ios_per_sec": 0, 00:14:58.834 "rw_mbytes_per_sec": 0, 00:14:58.834 "r_mbytes_per_sec": 0, 00:14:58.834 "w_mbytes_per_sec": 0 00:14:58.834 }, 00:14:58.834 "claimed": true, 00:14:58.834 "claim_type": "exclusive_write", 00:14:58.834 "zoned": false, 00:14:58.834 "supported_io_types": { 00:14:58.834 "read": true, 00:14:58.834 "write": true, 00:14:58.835 "unmap": true, 00:14:58.835 "flush": true, 00:14:58.835 "reset": true, 00:14:58.835 "nvme_admin": false, 00:14:58.835 "nvme_io": false, 00:14:58.835 "nvme_io_md": false, 00:14:58.835 "write_zeroes": true, 00:14:58.835 "zcopy": true, 00:14:58.835 "get_zone_info": false, 00:14:58.835 "zone_management": false, 00:14:58.835 "zone_append": false, 00:14:58.835 "compare": false, 00:14:58.835 "compare_and_write": false, 00:14:58.835 "abort": true, 00:14:58.835 "seek_hole": false, 00:14:58.835 "seek_data": false, 00:14:58.835 "copy": true, 00:14:58.835 "nvme_iov_md": false 00:14:58.835 }, 00:14:58.835 "memory_domains": [ 00:14:58.835 { 00:14:58.835 "dma_device_id": "system", 00:14:58.835 "dma_device_type": 1 00:14:58.835 }, 00:14:58.835 { 00:14:58.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.835 "dma_device_type": 2 00:14:58.835 } 00:14:58.835 ], 00:14:58.835 "driver_specific": {} 00:14:58.835 } 00:14:58.835 ]' 00:14:58.835 07:04:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:14:59.092 07:04:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bs=512 00:14:59.092 07:04:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:14:59.092 07:04:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # nb=1048576 00:14:59.092 07:04:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bdev_size=512 00:14:59.092 07:04:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # echo 512 00:14:59.092 07:04:13 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:59.092 07:04:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:00.020 07:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:15:00.020 07:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # local i=0 00:15:00.020 07:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:15:00.020 07:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:15:00.020 07:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # sleep 2 00:15:01.913 07:04:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:15:01.913 07:04:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:15:01.913 07:04:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:15:01.913 07:04:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:15:01.913 07:04:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:15:01.913 07:04:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # return 0 00:15:01.913 07:04:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:15:01.913 07:04:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:15:02.171 07:04:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:15:02.171 07:04:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:15:02.171 07:04:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:15:02.171 07:04:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:02.171 07:04:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:15:02.171 07:04:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:15:02.171 07:04:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:15:02.171 07:04:16 
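The rpc_cmd and nvme connect calls traced above reduce to the following target-side provisioning and host-side attach, shown as a condensed sketch with rpc.py invoked directly and the NQN, serial and address values copied from this run (the --hostid flag seen in the log is dropped for brevity):

# Target side: RDMA transport with 4096-byte in-capsule data, a 512 MiB malloc bdev,
# and a subsystem exposing it on 192.168.100.8:4420 (values taken from the log above).
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
# Host side: attach over RDMA and wait for the namespace to appear as a block device.
nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 1; done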
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:15:02.171 07:04:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:15:02.171 07:04:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:15:02.428 07:04:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:15:03.359 07:04:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:15:03.359 07:04:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:15:03.359 07:04:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:03.359 07:04:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:03.359 07:04:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:03.359 ************************************ 00:15:03.359 START TEST filesystem_in_capsule_ext4 00:15:03.359 ************************************ 00:15:03.359 07:04:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:15:03.359 07:04:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:15:03.359 07:04:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:03.359 07:04:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:15:03.359 07:04:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:15:03.359 07:04:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:15:03.359 07:04:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:15:03.359 07:04:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:15:03.360 07:04:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:15:03.360 07:04:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:15:03.360 07:04:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:15:03.360 mke2fs 1.46.5 (30-Dec-2021) 00:15:03.360 Discarding device blocks: 0/522240 done 
00:15:03.360 Creating filesystem with 522240 1k blocks and 130560 inodes 00:15:03.360 Filesystem UUID: 0bfcc2dc-cac2-44d0-a920-70a8ac4e2c71 00:15:03.360 Superblock backups stored on blocks: 00:15:03.360 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:15:03.360 00:15:03.360 Allocating group tables: 0/64 done 00:15:03.360 Writing inode tables: 0/64 done 00:15:03.360 Creating journal (8192 blocks): done 00:15:03.617 Writing superblocks and filesystem accounting information: 0/64 done 00:15:03.617 00:15:03.617 07:04:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:15:03.617 07:04:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:03.617 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:03.617 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:15:03.617 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:03.617 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:15:03.617 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:15:03.617 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:03.617 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1582666 00:15:03.617 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:03.617 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:03.617 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:03.617 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:03.617 00:15:03.617 real 0m0.199s 00:15:03.617 user 0m0.027s 00:15:03.617 sys 0m0.079s 00:15:03.617 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:03.617 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:15:03.617 ************************************ 00:15:03.617 END TEST filesystem_in_capsule_ext4 00:15:03.617 ************************************ 00:15:03.617 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:15:03.617 07:04:18 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:03.617 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:03.617 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:03.617 ************************************ 00:15:03.617 START TEST filesystem_in_capsule_btrfs 00:15:03.618 ************************************ 00:15:03.618 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:15:03.618 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:15:03.618 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:03.618 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:15:03.618 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:15:03.618 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:15:03.618 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:15:03.618 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:15:03.618 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:15:03.618 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:15:03.618 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:15:03.876 btrfs-progs v6.6.2 00:15:03.876 See https://btrfs.readthedocs.io for more information. 00:15:03.876 00:15:03.876 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:15:03.876 NOTE: several default settings have changed in version 5.15, please make sure 00:15:03.876 this does not affect your deployments: 00:15:03.876 - DUP for metadata (-m dup) 00:15:03.876 - enabled no-holes (-O no-holes) 00:15:03.876 - enabled free-space-tree (-R free-space-tree) 00:15:03.876 00:15:03.876 Label: (null) 00:15:03.876 UUID: 57ef41ef-8932-4ad8-8796-352a7308abe3 00:15:03.876 Node size: 16384 00:15:03.876 Sector size: 4096 00:15:03.876 Filesystem size: 510.00MiB 00:15:03.876 Block group profiles: 00:15:03.876 Data: single 8.00MiB 00:15:03.876 Metadata: DUP 32.00MiB 00:15:03.876 System: DUP 8.00MiB 00:15:03.876 SSD detected: yes 00:15:03.876 Zoned device: no 00:15:03.876 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:15:03.876 Runtime features: free-space-tree 00:15:03.876 Checksum: crc32c 00:15:03.876 Number of devices: 1 00:15:03.876 Devices: 00:15:03.876 ID SIZE PATH 00:15:03.876 1 510.00MiB /dev/nvme0n1p1 00:15:03.876 00:15:03.876 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:15:03.876 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:03.876 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:03.876 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:15:03.876 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:03.876 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:15:03.876 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:15:03.876 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:03.876 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1582666 00:15:03.876 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:03.876 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:03.876 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:03.876 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:03.876 00:15:03.876 real 0m0.274s 00:15:03.876 user 0m0.041s 00:15:03.876 sys 0m0.131s 00:15:03.876 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:03.876 07:04:18 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:15:03.876 ************************************ 00:15:03.876 END TEST filesystem_in_capsule_btrfs 00:15:03.876 ************************************ 00:15:03.876 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:15:03.876 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:03.876 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:03.876 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:04.134 ************************************ 00:15:04.134 START TEST filesystem_in_capsule_xfs 00:15:04.134 ************************************ 00:15:04.134 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:15:04.134 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:15:04.134 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:04.134 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:15:04.134 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:15:04.134 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:15:04.134 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:15:04.134 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:15:04.134 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:15:04.134 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:15:04.134 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:15:04.134 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:15:04.134 = sectsz=512 attr=2, projid32bit=1 00:15:04.134 = crc=1 finobt=1, sparse=1, rmapbt=0 00:15:04.134 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:15:04.134 data = bsize=4096 blocks=130560, imaxpct=25 00:15:04.134 = sunit=0 swidth=0 blks 00:15:04.134 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:15:04.134 log =internal log bsize=4096 blocks=16384, version=2 00:15:04.134 = sectsz=512 sunit=0 blks, lazy-count=1 00:15:04.134 realtime =none extsz=4096 
blocks=0, rtextents=0 00:15:04.134 Discarding blocks...Done. 00:15:04.134 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:15:04.134 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:04.134 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:04.134 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:15:04.134 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:04.134 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:15:04.134 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:15:04.134 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:04.134 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1582666 00:15:04.134 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:04.134 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:04.134 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:04.134 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:04.134 00:15:04.134 real 0m0.211s 00:15:04.134 user 0m0.031s 00:15:04.134 sys 0m0.074s 00:15:04.134 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:04.134 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:15:04.134 ************************************ 00:15:04.134 END TEST filesystem_in_capsule_xfs 00:15:04.134 ************************************ 00:15:04.391 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:15:04.392 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:15:04.392 07:04:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:05.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.325 07:04:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:05.325 07:04:19 
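Each of the three sub-tests above (ext4, btrfs, xfs) runs the same partition / mkfs / mount / write / unmount cycle against the exported namespace; only the mkfs invocation differs. A simplified reconstruction of that loop, reusing the device and mountpoint names from the log (this is just the shape of the test, not the filesystem.sh source):

# One GPT partition spanning the whole 512 MiB namespace.
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe && sleep 1
mkdir -p /mnt/device
for fstype in ext4 btrfs xfs; do
    force=-f; [ "$fstype" = ext4 ] && force=-F   # mkfs.ext4 spells its force flag differently
    mkfs.$fstype $force /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync                # create, flush and delete one file
    rm /mnt/device/aaa && sync
    umount /mnt/device
done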
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1217 -- # local i=0 00:15:05.325 07:04:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:15:05.325 07:04:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:05.325 07:04:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:15:05.325 07:04:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:05.325 07:04:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # return 0 00:15:05.325 07:04:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:05.325 07:04:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.325 07:04:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:05.325 07:04:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.325 07:04:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:05.325 07:04:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1582666 00:15:05.325 07:04:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1582666 ']' 00:15:05.325 07:04:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1582666 00:15:05.325 07:04:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:15:05.325 07:04:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:05.325 07:04:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1582666 00:15:05.325 07:04:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:05.325 07:04:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:05.325 07:04:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1582666' 00:15:05.325 killing process with pid 1582666 00:15:05.325 07:04:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 1582666 00:15:05.325 07:04:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 1582666 00:15:08.607 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:15:08.607 00:15:08.607 real 0m11.724s 
00:15:08.607 user 0m42.819s 00:15:08.607 sys 0m1.503s 00:15:08.607 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:08.607 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:08.607 ************************************ 00:15:08.607 END TEST nvmf_filesystem_in_capsule 00:15:08.607 ************************************ 00:15:08.607 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:15:08.607 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:08.607 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:15:08.607 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:08.607 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:08.607 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:15:08.607 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:08.607 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:08.865 rmmod nvme_rdma 00:15:08.865 rmmod nvme_fabrics 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:08.865 00:15:08.865 real 0m31.613s 00:15:08.865 user 1m26.560s 00:15:08.865 sys 0m9.421s 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:08.865 ************************************ 00:15:08.865 END TEST nvmf_filesystem 00:15:08.865 ************************************ 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:08.865 ************************************ 00:15:08.865 START TEST nvmf_target_discovery 00:15:08.865 ************************************ 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:15:08.865 * Looking for test storage... 
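The teardown that closes nvmf_filesystem_in_capsule above is the mirror image of the setup: disconnect the host, delete the subsystem, stop the target and unload the host drivers. Condensed into plain commands (the killprocess/process_shm helpers are replaced here by a bare kill on the nvmf_tgt PID, which is an assumption about what they amount to in this run):

nvme disconnect -n nqn.2016-06.io.spdk:cnode1        # drop the host-side controller
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid" && wait "$nvmfpid"                   # stop the nvmf_tgt started earlier
modprobe -v -r nvme-rdma                             # nvmftestfini: unload host fabrics drivers
modprobe -v -r nvme-fabrics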
00:15:08.865 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.865 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.866 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.866 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:15:08.866 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.866 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:15:08.866 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:08.866 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:08.866 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:08.866 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:08.866 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:08.866 07:04:23 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:08.866 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:08.866 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:08.866 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:15:08.866 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:15:08.866 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:15:08.866 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:15:09.122 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:15:09.122 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:09.122 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:09.122 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:09.122 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:09.122 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:09.122 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.122 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:09.122 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.122 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:09.122 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:09.122 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:15:09.122 07:04:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@296 -- # e810=() 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:17.302 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == 
unbound ]] 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:17.302 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:17.302 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:17.302 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # rdma_device_init 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # uname 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:17.302 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:17.303 07:04:31 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:17.303 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:17.303 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:17.303 altname enp217s0f0np0 00:15:17.303 altname ens818f0np0 00:15:17.303 inet 192.168.100.8/24 scope global mlx_0_0 00:15:17.303 valid_lft forever preferred_lft forever 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip 
addr show mlx_0_1 00:15:17.303 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:17.303 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:17.303 altname enp217s0f1np1 00:15:17.303 altname ens818f1np1 00:15:17.303 inet 192.168.100.9/24 scope global mlx_0_1 00:15:17.303 valid_lft forever preferred_lft forever 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:17.303 192.168.100.9' 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:17.303 192.168.100.9' 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@457 -- # head -n 1 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:17.303 192.168.100.9' 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@458 -- # tail -n +2 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@458 -- # head -n 1 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1588834 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1588834 00:15:17.303 07:04:31 
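Annotation: the address bookkeeping traced above boils down to one pipeline per interface: read the IPv4 address with ip(8), take the first result as NVMF_FIRST_TARGET_IP and the second as NVMF_SECOND_TARGET_IP. A minimal stand-alone sketch of that logic (interface names and addresses as observed in this run, not the literal nvmf/common.sh source):
  # collect the IPv4 address of each RDMA interface, one per line
  rdma_ips=""
  for ifc in mlx_0_0 mlx_0_1; do
      ip_addr=$(ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1)
      rdma_ips+="$ip_addr"$'\n'
  done
  # first address -> first target IP, second address -> second target IP
  NVMF_FIRST_TARGET_IP=$(echo "$rdma_ips" | head -n 1)                  # 192.168.100.8 in this run
  NVMF_SECOND_TARGET_IP=$(echo "$rdma_ips" | tail -n +2 | head -n 1)    # 192.168.100.9 in this run
  echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"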
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 1588834 ']' 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:17.303 07:04:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:17.303 [2024-07-24 07:04:31.892457] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:15:17.303 [2024-07-24 07:04:31.892554] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.562 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.562 [2024-07-24 07:04:32.039153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:17.819 [2024-07-24 07:04:32.241618] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.819 [2024-07-24 07:04:32.241666] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.820 [2024-07-24 07:04:32.241680] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.820 [2024-07-24 07:04:32.241690] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.820 [2024-07-24 07:04:32.241701] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:17.820 [2024-07-24 07:04:32.241823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.820 [2024-07-24 07:04:32.241956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.820 [2024-07-24 07:04:32.242038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.820 [2024-07-24 07:04:32.242049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:18.077 07:04:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:18.077 07:04:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:15:18.077 07:04:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:18.077 07:04:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:18.077 07:04:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.334 07:04:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:18.334 07:04:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:18.334 07:04:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.334 07:04:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.334 [2024-07-24 07:04:32.743649] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f7e51ed6940) succeed. 00:15:18.334 [2024-07-24 07:04:32.752950] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f7e51e92940) succeed. 
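Annotation: with both RDMA ports resolved, the harness starts nvmf_tgt and creates the RDMA transport over JSON-RPC; rpc_cmd in the trace effectively forwards to SPDK's scripts/rpc.py. Done by hand from an SPDK checkout, the same setup would look roughly like this (a sketch with the values used in this run, not the harness code itself):
  # start the NVMe-oF target on 4 cores (-m 0xF) with all tracepoint groups enabled (-e 0xFFFF)
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # ... wait until the target listens on /var/tmp/spdk.sock (waitforlisten in the trace) ...
  # create the RDMA transport with the options seen above
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192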
00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.593 Null1 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.593 [2024-07-24 07:04:33.095465] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.593 Null2 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:15:18.593 07:04:33 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.593 Null3 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.593 07:04:33 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.593 Null4 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
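Annotation: the rpc_cmd sequence above builds the test topology: four null bdevs, each exported through its own subsystem with an RDMA listener on 192.168.100.8:4420, plus an explicit discovery listener and a referral to port 4430. Collapsed into plain rpc.py calls (arguments exactly as issued in the trace; the rpc.py path is assumed relative to an SPDK checkout):
  for i in 1 2 3 4; do
      # null bdev Null$i (size and block-size arguments as used by discovery.sh)
      ./scripts/rpc.py bdev_null_create "Null$i" 102400 512
      ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
      ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
      ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t rdma -a 192.168.100.8 -s 4420
  done
  # make the discovery service reachable over RDMA and advertise a referral on port 4430
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
  ./scripts/rpc.py nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430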
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.593 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:15:18.852 00:15:18.852 Discovery Log Number of Records 6, Generation counter 6 00:15:18.852 =====Discovery Log Entry 0====== 00:15:18.852 trtype: rdma 00:15:18.852 adrfam: ipv4 00:15:18.852 subtype: current discovery subsystem 00:15:18.852 treq: not required 00:15:18.852 portid: 0 00:15:18.852 trsvcid: 4420 00:15:18.852 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:18.852 traddr: 192.168.100.8 00:15:18.852 eflags: explicit discovery connections, duplicate discovery information 00:15:18.852 rdma_prtype: not specified 00:15:18.852 rdma_qptype: connected 00:15:18.852 rdma_cms: rdma-cm 00:15:18.852 rdma_pkey: 0x0000 00:15:18.852 =====Discovery Log Entry 1====== 00:15:18.852 trtype: rdma 00:15:18.852 adrfam: ipv4 00:15:18.852 subtype: nvme subsystem 00:15:18.852 treq: not required 00:15:18.852 portid: 0 00:15:18.852 trsvcid: 4420 00:15:18.852 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:18.852 traddr: 192.168.100.8 00:15:18.852 eflags: none 00:15:18.852 rdma_prtype: not specified 00:15:18.852 rdma_qptype: connected 00:15:18.852 rdma_cms: rdma-cm 00:15:18.852 rdma_pkey: 0x0000 00:15:18.852 =====Discovery Log Entry 2====== 00:15:18.852 trtype: rdma 00:15:18.852 adrfam: ipv4 00:15:18.852 subtype: nvme subsystem 00:15:18.852 treq: not required 00:15:18.852 portid: 0 00:15:18.852 trsvcid: 4420 00:15:18.852 subnqn: nqn.2016-06.io.spdk:cnode2 00:15:18.852 traddr: 192.168.100.8 00:15:18.852 eflags: none 00:15:18.852 rdma_prtype: not specified 00:15:18.852 rdma_qptype: connected 00:15:18.852 rdma_cms: rdma-cm 00:15:18.852 rdma_pkey: 0x0000 00:15:18.852 =====Discovery Log Entry 3====== 00:15:18.852 trtype: rdma 00:15:18.852 adrfam: ipv4 00:15:18.852 subtype: nvme subsystem 00:15:18.852 treq: not required 00:15:18.852 portid: 0 00:15:18.852 trsvcid: 4420 00:15:18.852 subnqn: nqn.2016-06.io.spdk:cnode3 00:15:18.852 traddr: 192.168.100.8 00:15:18.852 eflags: none 00:15:18.852 rdma_prtype: not specified 00:15:18.852 rdma_qptype: connected 00:15:18.852 rdma_cms: rdma-cm 00:15:18.852 rdma_pkey: 0x0000 00:15:18.852 =====Discovery Log Entry 4====== 00:15:18.852 trtype: rdma 00:15:18.852 adrfam: ipv4 00:15:18.852 subtype: nvme subsystem 00:15:18.852 treq: not required 00:15:18.852 portid: 0 00:15:18.852 trsvcid: 4420 00:15:18.852 subnqn: nqn.2016-06.io.spdk:cnode4 00:15:18.852 traddr: 192.168.100.8 00:15:18.852 eflags: none 00:15:18.852 rdma_prtype: not specified 00:15:18.852 rdma_qptype: connected 00:15:18.852 rdma_cms: rdma-cm 00:15:18.852 rdma_pkey: 0x0000 00:15:18.852 =====Discovery Log Entry 5====== 00:15:18.852 trtype: rdma 00:15:18.852 adrfam: ipv4 00:15:18.852 subtype: discovery subsystem referral 00:15:18.852 treq: not required 00:15:18.852 portid: 0 00:15:18.852 trsvcid: 4430 00:15:18.852 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:18.852 traddr: 192.168.100.8 00:15:18.852 eflags: none 00:15:18.852 rdma_prtype: unrecognized 00:15:18.852 rdma_qptype: unrecognized 00:15:18.852 rdma_cms: unrecognized 00:15:18.852 rdma_pkey: 0x0000 00:15:18.852 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:15:18.852 Perform nvmf subsystem discovery via RPC 00:15:18.852 07:04:33 
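Annotation: the discovery log above can be reproduced against the same target with nvme-cli alone. The host NQN and host ID below are the ones generated for this run, so substitute your own when repeating it elsewhere:
  nvme discover -t rdma -a 192.168.100.8 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
      --hostid=8013ee90-59d8-e711-906e-00163566263e
  # expected: 6 records - the discovery subsystem itself, cnode1 through cnode4 on port 4420,
  # and the referral to the discovery service on port 4430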
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:15:18.852 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.852 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.852 [ 00:15:18.852 { 00:15:18.852 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:18.852 "subtype": "Discovery", 00:15:18.852 "listen_addresses": [ 00:15:18.852 { 00:15:18.852 "trtype": "RDMA", 00:15:18.852 "adrfam": "IPv4", 00:15:18.852 "traddr": "192.168.100.8", 00:15:18.852 "trsvcid": "4420" 00:15:18.852 } 00:15:18.852 ], 00:15:18.852 "allow_any_host": true, 00:15:18.852 "hosts": [] 00:15:18.852 }, 00:15:18.852 { 00:15:18.852 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.852 "subtype": "NVMe", 00:15:18.852 "listen_addresses": [ 00:15:18.852 { 00:15:18.852 "trtype": "RDMA", 00:15:18.852 "adrfam": "IPv4", 00:15:18.852 "traddr": "192.168.100.8", 00:15:18.852 "trsvcid": "4420" 00:15:18.852 } 00:15:18.852 ], 00:15:18.852 "allow_any_host": true, 00:15:18.852 "hosts": [], 00:15:18.852 "serial_number": "SPDK00000000000001", 00:15:18.852 "model_number": "SPDK bdev Controller", 00:15:18.852 "max_namespaces": 32, 00:15:18.852 "min_cntlid": 1, 00:15:18.852 "max_cntlid": 65519, 00:15:18.852 "namespaces": [ 00:15:18.852 { 00:15:18.852 "nsid": 1, 00:15:18.852 "bdev_name": "Null1", 00:15:18.852 "name": "Null1", 00:15:18.852 "nguid": "73F72A880EC24C20B0962A0A6DCEC667", 00:15:18.852 "uuid": "73f72a88-0ec2-4c20-b096-2a0a6dcec667" 00:15:18.852 } 00:15:18.852 ] 00:15:18.852 }, 00:15:18.852 { 00:15:18.852 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:18.852 "subtype": "NVMe", 00:15:18.852 "listen_addresses": [ 00:15:18.852 { 00:15:18.852 "trtype": "RDMA", 00:15:18.852 "adrfam": "IPv4", 00:15:18.852 "traddr": "192.168.100.8", 00:15:18.852 "trsvcid": "4420" 00:15:18.852 } 00:15:18.852 ], 00:15:18.852 "allow_any_host": true, 00:15:18.852 "hosts": [], 00:15:18.852 "serial_number": "SPDK00000000000002", 00:15:18.852 "model_number": "SPDK bdev Controller", 00:15:18.852 "max_namespaces": 32, 00:15:18.852 "min_cntlid": 1, 00:15:18.852 "max_cntlid": 65519, 00:15:18.852 "namespaces": [ 00:15:18.852 { 00:15:18.852 "nsid": 1, 00:15:18.852 "bdev_name": "Null2", 00:15:18.852 "name": "Null2", 00:15:18.852 "nguid": "30BC0A037A9540B69C74BDBAEC379BB5", 00:15:18.852 "uuid": "30bc0a03-7a95-40b6-9c74-bdbaec379bb5" 00:15:18.852 } 00:15:18.852 ] 00:15:18.852 }, 00:15:18.852 { 00:15:18.852 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:15:18.852 "subtype": "NVMe", 00:15:18.852 "listen_addresses": [ 00:15:18.852 { 00:15:18.852 "trtype": "RDMA", 00:15:18.852 "adrfam": "IPv4", 00:15:18.852 "traddr": "192.168.100.8", 00:15:18.852 "trsvcid": "4420" 00:15:18.852 } 00:15:18.852 ], 00:15:18.852 "allow_any_host": true, 00:15:18.852 "hosts": [], 00:15:18.852 "serial_number": "SPDK00000000000003", 00:15:18.852 "model_number": "SPDK bdev Controller", 00:15:18.852 "max_namespaces": 32, 00:15:18.852 "min_cntlid": 1, 00:15:18.852 "max_cntlid": 65519, 00:15:18.852 "namespaces": [ 00:15:18.852 { 00:15:18.852 "nsid": 1, 00:15:18.852 "bdev_name": "Null3", 00:15:18.852 "name": "Null3", 00:15:18.852 "nguid": "60B9E4E864C34685BC6F58E617E408B2", 00:15:18.852 "uuid": "60b9e4e8-64c3-4685-bc6f-58e617e408b2" 00:15:18.852 } 00:15:18.852 ] 00:15:18.852 }, 00:15:18.852 { 00:15:18.852 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:15:18.852 "subtype": "NVMe", 00:15:18.852 "listen_addresses": [ 00:15:18.852 { 00:15:18.852 
"trtype": "RDMA", 00:15:18.852 "adrfam": "IPv4", 00:15:18.852 "traddr": "192.168.100.8", 00:15:18.852 "trsvcid": "4420" 00:15:18.852 } 00:15:18.852 ], 00:15:18.852 "allow_any_host": true, 00:15:18.852 "hosts": [], 00:15:18.852 "serial_number": "SPDK00000000000004", 00:15:18.852 "model_number": "SPDK bdev Controller", 00:15:18.852 "max_namespaces": 32, 00:15:18.852 "min_cntlid": 1, 00:15:18.852 "max_cntlid": 65519, 00:15:18.852 "namespaces": [ 00:15:18.852 { 00:15:18.852 "nsid": 1, 00:15:18.853 "bdev_name": "Null4", 00:15:18.853 "name": "Null4", 00:15:18.853 "nguid": "3388D0FDD5C2403297CA45D7B02E023D", 00:15:18.853 "uuid": "3388d0fd-d5c2-4032-97ca-45d7b02e023d" 00:15:18.853 } 00:15:18.853 ] 00:15:18.853 } 00:15:18.853 ] 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:15:18.853 
07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:15:18.853 07:04:33 
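Annotation: teardown mirrors setup: each subsystem and its backing null bdev is deleted, the referral is removed, and bdev_get_bdevs is queried to confirm nothing is left behind. As loose rpc.py calls (a sketch, not the literal discovery.sh code) that sequence is approximately:
  for i in 1 2 3 4; do
      ./scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
      ./scripts/rpc.py bdev_null_delete "Null$i"
  done
  ./scripts/rpc.py nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430
  # the test then expects an empty bdev list
  ./scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'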
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:18.853 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:19.111 rmmod nvme_rdma 00:15:19.111 rmmod nvme_fabrics 00:15:19.111 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:19.111 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:15:19.111 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:15:19.111 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1588834 ']' 00:15:19.111 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1588834 00:15:19.111 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 1588834 ']' 00:15:19.111 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 1588834 00:15:19.111 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:15:19.111 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:19.111 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1588834 00:15:19.111 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:19.111 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:19.111 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1588834' 00:15:19.111 killing process with pid 1588834 00:15:19.111 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 1588834 00:15:19.111 07:04:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 1588834 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:21.008 00:15:21.008 real 0m12.055s 00:15:21.008 user 0m12.832s 00:15:21.008 sys 0m6.962s 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:21.008 ************************************ 00:15:21.008 END TEST 
nvmf_target_discovery 00:15:21.008 ************************************ 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:21.008 ************************************ 00:15:21.008 START TEST nvmf_referrals 00:15:21.008 ************************************ 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:15:21.008 * Looking for test storage... 00:15:21.008 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:15:21.008 07:04:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 
00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:29.121 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:29.121 
07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:29.121 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:29.121 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:29.121 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # rdma_device_init 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # uname 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 
00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:29.121 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:29.122 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:29.122 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:29.122 altname enp217s0f0np0 00:15:29.122 altname ens818f0np0 00:15:29.122 inet 192.168.100.8/24 scope global mlx_0_0 00:15:29.122 valid_lft forever preferred_lft forever 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:29.122 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:29.122 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:29.122 altname enp217s0f1np1 00:15:29.122 altname ens818f1np1 00:15:29.122 inet 192.168.100.9/24 scope global mlx_0_1 00:15:29.122 valid_lft forever preferred_lft forever 00:15:29.122 07:04:42 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:29.122 07:04:42 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:29.122 192.168.100.9' 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:29.122 192.168.100.9' 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@457 -- # head -n 1 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:29.122 192.168.100.9' 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@458 -- # tail -n +2 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@458 -- # head -n 1 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1593357 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1593357 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 1593357 ']' 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:29.122 07:04:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:29.122 [2024-07-24 07:04:43.024109] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:15:29.122 [2024-07-24 07:04:43.024204] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.122 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.122 [2024-07-24 07:04:43.171058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:29.122 [2024-07-24 07:04:43.367729] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.122 [2024-07-24 07:04:43.367773] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.122 [2024-07-24 07:04:43.367787] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.122 [2024-07-24 07:04:43.367813] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.122 [2024-07-24 07:04:43.367825] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:29.122 [2024-07-24 07:04:43.367907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.122 [2024-07-24 07:04:43.367980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:29.122 [2024-07-24 07:04:43.368043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.122 [2024-07-24 07:04:43.368054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:29.380 07:04:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:29.380 07:04:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:15:29.380 07:04:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:29.380 07:04:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:29.380 07:04:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:29.380 07:04:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:29.380 07:04:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:29.380 07:04:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.380 07:04:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:29.380 [2024-07-24 07:04:43.869984] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f2df2800940) succeed. 00:15:29.380 [2024-07-24 07:04:43.879594] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f2df27bc940) succeed. 
00:15:29.639 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.639 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:15:29.639 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.639 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:29.639 [2024-07-24 07:04:44.196530] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:15:29.639 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.639 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:15:29.639 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.639 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:29.639 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.639 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:15:29.639 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.639 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:29.639 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.639 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:15:29.639 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.639 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:29.639 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.639 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:29.639 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:15:29.639 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.639 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:29.639 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # sort 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # 
rpc_cmd nvmf_discovery_get_referrals 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:29.898 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # sort 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:15:30.157 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:15:30.416 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:30.416 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:15:30.416 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:15:30.416 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:15:30.416 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:15:30.416 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:15:30.416 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ 
nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:15:30.416 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:15:30.416 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.416 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:30.416 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.416 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:15:30.416 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:30.416 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:30.416 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.416 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:30.416 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:30.416 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:15:30.416 07:04:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.416 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:15:30.416 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:15:30.416 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:15:30.416 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:30.416 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:30.416 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:15:30.416 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:30.416 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:30.675 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:15:30.675 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:15:30.675 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:15:30.675 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:15:30.675 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:15:30.675 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:15:30.675 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:15:30.675 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:15:30.675 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:15:30.675 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:15:30.675 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:15:30.675 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:15:30.675 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:15:30.933 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:15:30.933 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:15:30.933 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.933 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:30.933 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.933 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:30.933 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.933 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:15:30.933 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:30.933 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.933 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:15:30.933 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:15:30.933 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:30.933 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:30.933 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:15:30.933 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:30.933 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 
00:15:30.933 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:15:30.933 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:15:30.933 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:15:30.933 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:15:30.933 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:30.933 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:15:30.933 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:30.934 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:30.934 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:15:30.934 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:30.934 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:30.934 rmmod nvme_rdma 00:15:30.934 rmmod nvme_fabrics 00:15:30.934 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:30.934 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:15:30.934 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:15:30.934 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1593357 ']' 00:15:30.934 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1593357 00:15:30.934 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 1593357 ']' 00:15:30.934 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 1593357 00:15:30.934 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:15:30.934 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:30.934 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1593357 00:15:31.192 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:31.192 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:31.192 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1593357' 00:15:31.192 killing process with pid 1593357 00:15:31.192 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 1593357 00:15:31.192 07:04:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 1593357 00:15:33.097 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:33.097 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:33.097 00:15:33.097 real 0m11.938s 00:15:33.097 user 0m16.297s 00:15:33.097 sys 0m6.539s 00:15:33.097 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:33.097 07:04:47 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:33.097 ************************************ 00:15:33.097 END TEST nvmf_referrals 00:15:33.097 ************************************ 00:15:33.097 07:04:47 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:15:33.097 07:04:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:33.097 07:04:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:33.097 07:04:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:33.097 ************************************ 00:15:33.097 START TEST nvmf_connect_disconnect 00:15:33.098 ************************************ 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:15:33.098 * Looking for test storage... 00:15:33.098 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:33.098 07:04:47 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:15:33.098 07:04:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:41.220 07:04:55 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:41.220 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:41.220 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:41.220 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:41.221 07:04:55 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:41.221 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:41.221 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # uname 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # 
allocate_nic_ips 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # 
ip addr show mlx_0_0 00:15:41.221 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:41.221 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:41.221 altname enp217s0f0np0 00:15:41.221 altname ens818f0np0 00:15:41.221 inet 192.168.100.8/24 scope global mlx_0_0 00:15:41.221 valid_lft forever preferred_lft forever 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:41.221 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:41.221 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:41.221 altname enp217s0f1np1 00:15:41.221 altname ens818f1np1 00:15:41.221 inet 192.168.100.9/24 scope global mlx_0_1 00:15:41.221 valid_lft forever preferred_lft forever 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:41.221 192.168.100.9' 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:41.221 192.168.100.9' 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:41.221 192.168.100.9' 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # head -n 1 
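The per-interface address lookup traced above repeats the same three-stage pipeline each time; stripped of the harness wrappers it reduces to the sketch below (interface names and addresses are the ones printed in this run):

# Sketch of the get_ip_address pipeline traced above.
for iface in mlx_0_0 mlx_0_1; do
    ip -o -4 addr show "$iface" | awk '{print $4}' | cut -d/ -f1
done
# prints 192.168.100.8 and 192.168.100.9, which the harness records as
# NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP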
00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1598161 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1598161 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 1598161 ']' 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:41.221 07:04:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:41.481 [2024-07-24 07:04:55.871711] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:15:41.481 [2024-07-24 07:04:55.871802] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:41.481 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.481 [2024-07-24 07:04:56.017973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:41.740 [2024-07-24 07:04:56.226409] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:41.740 [2024-07-24 07:04:56.226456] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:41.740 [2024-07-24 07:04:56.226470] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:41.740 [2024-07-24 07:04:56.226481] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:41.740 [2024-07-24 07:04:56.226492] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:41.740 [2024-07-24 07:04:56.226612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.740 [2024-07-24 07:04:56.226747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:41.740 [2024-07-24 07:04:56.226766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.740 [2024-07-24 07:04:56.226778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:42.310 07:04:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:42.310 07:04:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:15:42.310 07:04:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:42.310 07:04:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:42.310 07:04:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:42.310 07:04:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:42.310 07:04:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:15:42.310 07:04:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.310 07:04:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:42.310 [2024-07-24 07:04:56.701298] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:15:42.310 [2024-07-24 07:04:56.726227] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f0f640cd940) succeed. 00:15:42.310 [2024-07-24 07:04:56.735563] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f0f64087940) succeed. 
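At this point the target application is running (four reactors on cores 0-3, matching the -m 0xF mask) and the RDMA transport exists, which is when the two mlx5 IB devices are opened. Reproducing the same bring-up outside the harness would look roughly like the sketch below; the wrapper functions seen in the trace (nvmfappstart, rpc_cmd, waitforlisten) resolve to the binary and scripts/rpc.py calls shown, and the polling loop is only a stand-in for waitforlisten, not the harness's actual code:

# Sketch, assuming the SPDK checkout path used by this job.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # same flags as the traced command line
until "$SPDK"/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2                                        # wait for the RPC socket to accept calls
done
"$SPDK"/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0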
00:15:42.310 07:04:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.310 07:04:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:15:42.310 07:04:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.310 07:04:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:42.569 07:04:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.569 07:04:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:15:42.569 07:04:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:42.569 07:04:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.569 07:04:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:42.569 07:04:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.569 07:04:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:42.569 07:04:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.569 07:04:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:42.569 07:04:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.569 07:04:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:42.569 07:04:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.569 07:04:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:42.569 [2024-07-24 07:04:57.007867] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:42.569 07:04:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.569 07:04:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:15:42.569 07:04:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:15:42.569 07:04:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:15:42.569 07:04:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:15:45.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:52.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.964 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.539 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:16:04.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:23.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:26.445 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:36.304 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:39.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:42.139 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:48.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:52.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:55.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:01.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:04.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:07.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:11.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:13.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:16.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:20.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:23.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:26.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:29.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:32.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:35.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:39.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:42.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:45.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:48.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:51.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:54.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:58.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:00.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:04.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:07.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:10.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:13.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:16.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:19.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:23.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:26.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:29.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:32.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:35.593 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:38.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:42.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:45.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:48.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:51.319 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:54.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:57.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:01.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:03.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:07.053 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:10.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:13.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:16.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:19.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:22.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:26.053 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:29.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:31.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:35.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:38.541 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:41.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:45.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:47.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:50.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:54.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:57.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:00.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:03.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:06.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:09.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:13.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:16.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:19.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:22.435 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:25.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:29.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:32.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:34.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:38.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:41.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:44.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:48.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:50.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:53.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:57.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:57.151 07:10:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:20:57.151 07:10:11 
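Only nvme-cli's disconnect summaries surface in the hundred iterations above because the loop runs after 'set +x' (connect_disconnect.sh@34). The sketch below is therefore an assumption about the shape of a single iteration, built from the pieces that are traced: the connect string the harness prepared (nvme connect -i 8) and the subsystem and RDMA listener configured earlier (nqn.2016-06.io.spdk:cnode1 on 192.168.100.8:4420, backed by Malloc0):

# Hedged approximation of one iteration; the real loop body is not shown in this log.
nqn=nqn.2016-06.io.spdk:cnode1
nvme connect -t rdma -a 192.168.100.8 -s 4420 -n "$nqn" -i 8   # -i 8 comes from NVME_CONNECT above
# ... the Malloc0-backed namespace appears as an nvme block device here ...
nvme disconnect -n "$nqn"   # prints: NQN:<nqn> disconnected 1 controller(s)

Each "disconnected 1 controller(s)" line above is the output of that final command, so the count of such lines tracks the 100 configured iterations.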
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:20:57.151 07:10:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:57.151 07:10:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:20:57.151 07:10:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:57.151 07:10:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:57.151 07:10:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:20:57.151 07:10:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:57.151 07:10:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:57.151 rmmod nvme_rdma 00:20:57.151 rmmod nvme_fabrics 00:20:57.151 07:10:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:57.151 07:10:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:20:57.151 07:10:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:20:57.151 07:10:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1598161 ']' 00:20:57.151 07:10:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1598161 00:20:57.151 07:10:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1598161 ']' 00:20:57.151 07:10:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 1598161 00:20:57.151 07:10:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:20:57.151 07:10:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:57.151 07:10:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1598161 00:20:57.151 07:10:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:57.151 07:10:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:57.151 07:10:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1598161' 00:20:57.151 killing process with pid 1598161 00:20:57.151 07:10:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 1598161 00:20:57.151 07:10:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 1598161 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:20:59.052 00:20:59.052 real 5m25.777s 00:20:59.052 user 21m2.906s 00:20:59.052 sys 0m18.827s 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:59.052 
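The teardown traced above (nvmftestfini) unloads the fabrics modules and stops the target it started; the bare "rmmod nvme_rdma" and "rmmod nvme_fabrics" lines are modprobe's verbose output. A manual equivalent, assuming the PID recorded in nvmfpid for this run (1598161):

# Hedged cleanup sketch mirroring nvmftestfini/killprocess above.
modprobe -v -r nvme-rdma     # verbose output: rmmod nvme_rdma, rmmod nvme_fabrics
modprobe -v -r nvme-fabrics
kill 1598161                 # nvmf_tgt PID from nvmfpid; the harness then waits for it to exit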
************************************ 00:20:59.052 END TEST nvmf_connect_disconnect 00:20:59.052 ************************************ 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:59.052 ************************************ 00:20:59.052 START TEST nvmf_multitarget 00:20:59.052 ************************************ 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:20:59.052 * Looking for test storage... 00:20:59.052 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:59.052 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:59.053 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.053 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.053 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.053 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:20:59.053 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.053 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:20:59.053 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:59.053 07:10:13 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:59.053 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:59.053 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:59.053 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:59.053 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:59.053 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:59.053 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:59.053 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:20:59.053 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:20:59.053 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:20:59.053 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:59.053 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:59.053 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:59.053 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:59.053 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.053 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:59.053 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.053 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:59.053 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:59.053 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:20:59.053 07:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:07.233 
07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:07.233 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:07.233 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.233 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:07.233 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:21:07.234 Found net devices under 0000:d9:00.1: mlx_0_1 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # rdma_device_init 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # uname 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:07.234 07:10:21 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:07.234 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:07.234 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:21:07.234 altname enp217s0f0np0 00:21:07.234 altname ens818f0np0 00:21:07.234 inet 192.168.100.8/24 scope global mlx_0_0 00:21:07.234 valid_lft forever preferred_lft forever 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:07.234 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:07.234 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:07.234 altname enp217s0f1np1 00:21:07.234 altname ens818f1np1 00:21:07.234 inet 192.168.100.9/24 scope global mlx_0_1 00:21:07.234 valid_lft forever preferred_lft forever 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:21:07.234 07:10:21 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:07.234 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:07.235 192.168.100.9' 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:07.235 192.168.100.9' 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@457 -- # head -n 1 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:07.235 192.168.100.9' 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@458 -- # tail -n +2 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@458 -- # head -n 1 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1657675 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1657675 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 1657675 ']' 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:21:07.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:07.235 07:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:21:07.495 [2024-07-24 07:10:21.906940] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:21:07.495 [2024-07-24 07:10:21.907039] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.495 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.495 [2024-07-24 07:10:22.059345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:07.754 [2024-07-24 07:10:22.275122] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.754 [2024-07-24 07:10:22.275170] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.754 [2024-07-24 07:10:22.275185] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.754 [2024-07-24 07:10:22.275197] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.754 [2024-07-24 07:10:22.275208] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:07.754 [2024-07-24 07:10:22.276662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.754 [2024-07-24 07:10:22.276685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.754 [2024-07-24 07:10:22.276743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.754 [2024-07-24 07:10:22.276754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:08.322 07:10:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:08.322 07:10:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:21:08.322 07:10:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:08.322 07:10:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:08.322 07:10:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:21:08.322 07:10:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.322 07:10:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:08.322 07:10:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:21:08.322 07:10:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:21:08.322 07:10:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:21:08.322 07:10:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:21:08.322 "nvmf_tgt_1" 00:21:08.581 07:10:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:21:08.581 "nvmf_tgt_2" 00:21:08.581 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:21:08.581 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:21:08.581 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:21:08.581 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:21:08.840 true 00:21:08.840 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:21:08.840 true 00:21:08.840 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:21:08.840 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:21:09.099 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:21:09.099 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:21:09.099 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:21:09.099 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:09.099 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:21:09.099 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:09.099 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:09.099 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:21:09.099 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:09.099 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:09.099 rmmod nvme_rdma 00:21:09.099 rmmod nvme_fabrics 00:21:09.099 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:09.099 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:21:09.099 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:21:09.099 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1657675 ']' 00:21:09.099 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1657675 00:21:09.099 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1657675 ']' 00:21:09.099 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 
-- # kill -0 1657675 00:21:09.099 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:21:09.099 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:09.099 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1657675 00:21:09.099 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:09.099 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:09.099 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1657675' 00:21:09.099 killing process with pid 1657675 00:21:09.099 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1657675 00:21:09.099 07:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1657675 00:21:10.477 07:10:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:10.477 07:10:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:10.477 00:21:10.477 real 0m11.528s 00:21:10.477 user 0m12.566s 00:21:10.477 sys 0m6.978s 00:21:10.477 07:10:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:10.477 07:10:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:21:10.477 ************************************ 00:21:10.477 END TEST nvmf_multitarget 00:21:10.477 ************************************ 00:21:10.477 07:10:24 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:21:10.477 07:10:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:10.477 07:10:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:10.477 07:10:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:10.477 ************************************ 00:21:10.477 START TEST nvmf_rpc 00:21:10.477 ************************************ 00:21:10.477 07:10:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:21:10.477 * Looking for test storage... 
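Everything the multitarget test above did went through JSON-RPC against the running nvmf_tgt: count the existing targets, create nvmf_tgt_1 and nvmf_tgt_2 with a 32-subsystem cap, recount, delete them again, and verify only the default target is left. A minimal sketch of that sequence using the generic scripts/rpc.py client instead of the test's own multitarget_rpc.py wrapper (the flag spellings mirror what the wrapper is invoked with in the trace; treat the rpc.py form and paths as assumptions):

  # a freshly started nvmf_tgt has exactly one target
  ./scripts/rpc.py nvmf_get_targets | jq length          # -> 1

  # add two more targets, each limited to 32 subsystems
  ./scripts/rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
  ./scripts/rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
  ./scripts/rpc.py nvmf_get_targets | jq length          # -> 3

  # tear them back down and confirm only the default target remains
  ./scripts/rpc.py nvmf_delete_target -n nvmf_tgt_1
  ./scripts/rpc.py nvmf_delete_target -n nvmf_tgt_2
  ./scripts/rpc.py nvmf_get_targets | jq length          # -> 1

The '[' 1 '!=' 1 ']' and '[' 3 '!=' 3 ']' checks in the trace are exactly these count comparisons, and the bare "true" lines are the delete RPCs' return values.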
00:21:10.477 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:10.477 07:10:25 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:10.477 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.478 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:10.478 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.737 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:10.737 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:10.737 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:21:10.737 07:10:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
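The gather_supported_nvmf_pci_devs step traced here builds arrays of Intel (e810, x722) and Mellanox device IDs and then walks the matching PCI functions, resolving each one to its kernel net device through sysfs; that is where the "Found 0000:d9:00.0 (0x15b3 - 0x1015)" and "Found net devices under 0000:d9:00.0: mlx_0_0" lines come from. A stand-alone sketch of the Mellanox branch, assuming the same sysfs layout as this host (vendor 0x15b3, ConnectX-4 Lx device ID 0x1015) rather than the harness's cached pci_bus_cache lookup:

  # enumerate PCI functions and keep the ones matching the Mellanox vendor/device pair
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(cat "$pci/vendor")    # e.g. 0x15b3
      device=$(cat "$pci/device")    # e.g. 0x1015
      [[ $vendor == 0x15b3 && $device == 0x1015 ]] || continue
      echo "Found ${pci##*/} ($vendor - $device)"
      # each function's net/ directory names the interface bound to it, e.g. mlx_0_0
      for netdev in "$pci"/net/*; do
          echo "Found net devices under ${pci##*/}: ${netdev##*/}"
      done
  done

With the two ports discovered, the script then loads the IB/RDMA stack (ib_core, ib_uverbs, rdma_cm, rdma_ucm, ...) and assigns 192.168.100.8/24 and 192.168.100.9/24 to mlx_0_0 and mlx_0_1, exactly as shown in the ip addr output that follows.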
00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:18.861 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:18.861 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:18.862 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.862 
07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:18.862 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:21:18.862 Found net devices under 0000:d9:00.1: mlx_0_1 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # rdma_device_init 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # uname 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:18.862 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:18.862 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:21:18.862 altname enp217s0f0np0 00:21:18.862 altname ens818f0np0 00:21:18.862 inet 192.168.100.8/24 scope global mlx_0_0 00:21:18.862 valid_lft forever preferred_lft forever 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:18.862 07:10:33 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:18.862 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:18.862 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:18.862 altname enp217s0f1np1 00:21:18.862 altname ens818f1np1 00:21:18.862 inet 192.168.100.9/24 scope global mlx_0_1 00:21:18.862 valid_lft forever preferred_lft forever 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:21:18.862 07:10:33 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:18.862 192.168.100.9' 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:18.862 192.168.100.9' 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@457 -- # head -n 1 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:18.862 192.168.100.9' 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@458 -- # tail -n +2 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@458 -- # head -n 1 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1662140 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@482 -- # waitforlisten 1662140 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1662140 ']' 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:18.862 07:10:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.862 [2024-07-24 07:10:33.421602] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:21:18.862 [2024-07-24 07:10:33.421703] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.120 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.120 [2024-07-24 07:10:33.568779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:19.378 [2024-07-24 07:10:33.783985] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.378 [2024-07-24 07:10:33.784034] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.378 [2024-07-24 07:10:33.784049] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.378 [2024-07-24 07:10:33.784076] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.378 [2024-07-24 07:10:33.784088] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
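nvmfappstart in the rpc test does the same bring-up as before: it launches build/bin/nvmf_tgt with -i 0 -e 0xFFFF -m 0xF (hence the four "Reactor started on core" notices), waits for the /var/tmp/spdk.sock RPC socket, and only then lets the test issue RPCs such as the nvmf_create_transport and nvmf_get_stats calls below. A rough sketch of the same bring-up outside the harness, assuming SPDK's usual build/bin and scripts/rpc.py locations and using a crude socket poll in place of the waitforlisten helper:

  # start the target on cores 0-3 with all tracepoint groups enabled
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  tgt_pid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done    # stand-in for waitforlisten

  # create the RDMA transport with 1024 shared buffers and an 8 KiB IO unit size
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

  # one poll group per reactor core, each polling both mlx5 devices
  ./scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].name'

The nvmf_get_stats dump at the end of this excerpt is what that last call returns before any subsystems or queue pairs exist, which is why every completion and work-request counter is still zero.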
00:21:19.378 [2024-07-24 07:10:33.784170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.378 [2024-07-24 07:10:33.784249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.378 [2024-07-24 07:10:33.784314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.378 [2024-07-24 07:10:33.784326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:19.635 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:19.635 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:21:19.635 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:19.635 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:19.635 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:19.635 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.635 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:21:19.635 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.635 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:19.893 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.893 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:21:19.893 "tick_rate": 2500000000, 00:21:19.893 "poll_groups": [ 00:21:19.893 { 00:21:19.893 "name": "nvmf_tgt_poll_group_000", 00:21:19.893 "admin_qpairs": 0, 00:21:19.893 "io_qpairs": 0, 00:21:19.893 "current_admin_qpairs": 0, 00:21:19.893 "current_io_qpairs": 0, 00:21:19.893 "pending_bdev_io": 0, 00:21:19.893 "completed_nvme_io": 0, 00:21:19.893 "transports": [] 00:21:19.893 }, 00:21:19.893 { 00:21:19.893 "name": "nvmf_tgt_poll_group_001", 00:21:19.893 "admin_qpairs": 0, 00:21:19.893 "io_qpairs": 0, 00:21:19.893 "current_admin_qpairs": 0, 00:21:19.893 "current_io_qpairs": 0, 00:21:19.893 "pending_bdev_io": 0, 00:21:19.894 "completed_nvme_io": 0, 00:21:19.894 "transports": [] 00:21:19.894 }, 00:21:19.894 { 00:21:19.894 "name": "nvmf_tgt_poll_group_002", 00:21:19.894 "admin_qpairs": 0, 00:21:19.894 "io_qpairs": 0, 00:21:19.894 "current_admin_qpairs": 0, 00:21:19.894 "current_io_qpairs": 0, 00:21:19.894 "pending_bdev_io": 0, 00:21:19.894 "completed_nvme_io": 0, 00:21:19.894 "transports": [] 00:21:19.894 }, 00:21:19.894 { 00:21:19.894 "name": "nvmf_tgt_poll_group_003", 00:21:19.894 "admin_qpairs": 0, 00:21:19.894 "io_qpairs": 0, 00:21:19.894 "current_admin_qpairs": 0, 00:21:19.894 "current_io_qpairs": 0, 00:21:19.894 "pending_bdev_io": 0, 00:21:19.894 "completed_nvme_io": 0, 00:21:19.894 "transports": [] 00:21:19.894 } 00:21:19.894 ] 00:21:19.894 }' 00:21:19.894 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:21:19.894 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:21:19.894 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:21:19.894 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:21:19.894 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 
== 4 )) 00:21:19.894 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:21:19.894 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:21:19.894 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:19.894 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.894 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:19.894 [2024-07-24 07:10:34.401974] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f6cc6845940) succeed. 00:21:19.894 [2024-07-24 07:10:34.411855] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f6cc6801940) succeed. 00:21:20.153 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.153 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:21:20.153 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.153 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:20.153 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.153 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:21:20.153 "tick_rate": 2500000000, 00:21:20.153 "poll_groups": [ 00:21:20.153 { 00:21:20.153 "name": "nvmf_tgt_poll_group_000", 00:21:20.153 "admin_qpairs": 0, 00:21:20.153 "io_qpairs": 0, 00:21:20.153 "current_admin_qpairs": 0, 00:21:20.153 "current_io_qpairs": 0, 00:21:20.153 "pending_bdev_io": 0, 00:21:20.153 "completed_nvme_io": 0, 00:21:20.153 "transports": [ 00:21:20.153 { 00:21:20.153 "trtype": "RDMA", 00:21:20.153 "pending_data_buffer": 0, 00:21:20.153 "devices": [ 00:21:20.153 { 00:21:20.153 "name": "mlx5_0", 00:21:20.153 "polls": 35769, 00:21:20.153 "idle_polls": 35769, 00:21:20.153 "completions": 0, 00:21:20.153 "requests": 0, 00:21:20.153 "request_latency": 0, 00:21:20.153 "pending_free_request": 0, 00:21:20.153 "pending_rdma_read": 0, 00:21:20.153 "pending_rdma_write": 0, 00:21:20.153 "pending_rdma_send": 0, 00:21:20.153 "total_send_wrs": 0, 00:21:20.153 "send_doorbell_updates": 0, 00:21:20.153 "total_recv_wrs": 4096, 00:21:20.153 "recv_doorbell_updates": 1 00:21:20.153 }, 00:21:20.153 { 00:21:20.153 "name": "mlx5_1", 00:21:20.153 "polls": 35769, 00:21:20.153 "idle_polls": 35769, 00:21:20.153 "completions": 0, 00:21:20.153 "requests": 0, 00:21:20.153 "request_latency": 0, 00:21:20.153 "pending_free_request": 0, 00:21:20.153 "pending_rdma_read": 0, 00:21:20.153 "pending_rdma_write": 0, 00:21:20.153 "pending_rdma_send": 0, 00:21:20.153 "total_send_wrs": 0, 00:21:20.153 "send_doorbell_updates": 0, 00:21:20.153 "total_recv_wrs": 4096, 00:21:20.153 "recv_doorbell_updates": 1 00:21:20.153 } 00:21:20.153 ] 00:21:20.153 } 00:21:20.153 ] 00:21:20.153 }, 00:21:20.153 { 00:21:20.153 "name": "nvmf_tgt_poll_group_001", 00:21:20.153 "admin_qpairs": 0, 00:21:20.153 "io_qpairs": 0, 00:21:20.153 "current_admin_qpairs": 0, 00:21:20.153 "current_io_qpairs": 0, 00:21:20.153 "pending_bdev_io": 0, 00:21:20.153 "completed_nvme_io": 0, 00:21:20.153 "transports": [ 00:21:20.153 { 00:21:20.153 "trtype": "RDMA", 00:21:20.153 "pending_data_buffer": 0, 00:21:20.153 "devices": [ 00:21:20.153 { 
00:21:20.153 "name": "mlx5_0", 00:21:20.153 "polls": 22836, 00:21:20.153 "idle_polls": 22836, 00:21:20.153 "completions": 0, 00:21:20.153 "requests": 0, 00:21:20.153 "request_latency": 0, 00:21:20.153 "pending_free_request": 0, 00:21:20.153 "pending_rdma_read": 0, 00:21:20.153 "pending_rdma_write": 0, 00:21:20.153 "pending_rdma_send": 0, 00:21:20.153 "total_send_wrs": 0, 00:21:20.153 "send_doorbell_updates": 0, 00:21:20.153 "total_recv_wrs": 4096, 00:21:20.153 "recv_doorbell_updates": 1 00:21:20.153 }, 00:21:20.153 { 00:21:20.153 "name": "mlx5_1", 00:21:20.153 "polls": 22836, 00:21:20.153 "idle_polls": 22836, 00:21:20.153 "completions": 0, 00:21:20.153 "requests": 0, 00:21:20.153 "request_latency": 0, 00:21:20.153 "pending_free_request": 0, 00:21:20.153 "pending_rdma_read": 0, 00:21:20.153 "pending_rdma_write": 0, 00:21:20.153 "pending_rdma_send": 0, 00:21:20.153 "total_send_wrs": 0, 00:21:20.153 "send_doorbell_updates": 0, 00:21:20.153 "total_recv_wrs": 4096, 00:21:20.153 "recv_doorbell_updates": 1 00:21:20.153 } 00:21:20.153 ] 00:21:20.153 } 00:21:20.153 ] 00:21:20.153 }, 00:21:20.153 { 00:21:20.153 "name": "nvmf_tgt_poll_group_002", 00:21:20.153 "admin_qpairs": 0, 00:21:20.153 "io_qpairs": 0, 00:21:20.153 "current_admin_qpairs": 0, 00:21:20.153 "current_io_qpairs": 0, 00:21:20.153 "pending_bdev_io": 0, 00:21:20.153 "completed_nvme_io": 0, 00:21:20.153 "transports": [ 00:21:20.153 { 00:21:20.153 "trtype": "RDMA", 00:21:20.153 "pending_data_buffer": 0, 00:21:20.153 "devices": [ 00:21:20.153 { 00:21:20.153 "name": "mlx5_0", 00:21:20.153 "polls": 11144, 00:21:20.153 "idle_polls": 11144, 00:21:20.153 "completions": 0, 00:21:20.153 "requests": 0, 00:21:20.153 "request_latency": 0, 00:21:20.153 "pending_free_request": 0, 00:21:20.153 "pending_rdma_read": 0, 00:21:20.153 "pending_rdma_write": 0, 00:21:20.153 "pending_rdma_send": 0, 00:21:20.153 "total_send_wrs": 0, 00:21:20.153 "send_doorbell_updates": 0, 00:21:20.153 "total_recv_wrs": 4096, 00:21:20.153 "recv_doorbell_updates": 1 00:21:20.153 }, 00:21:20.153 { 00:21:20.153 "name": "mlx5_1", 00:21:20.153 "polls": 11144, 00:21:20.153 "idle_polls": 11144, 00:21:20.153 "completions": 0, 00:21:20.153 "requests": 0, 00:21:20.153 "request_latency": 0, 00:21:20.153 "pending_free_request": 0, 00:21:20.153 "pending_rdma_read": 0, 00:21:20.153 "pending_rdma_write": 0, 00:21:20.153 "pending_rdma_send": 0, 00:21:20.153 "total_send_wrs": 0, 00:21:20.153 "send_doorbell_updates": 0, 00:21:20.153 "total_recv_wrs": 4096, 00:21:20.153 "recv_doorbell_updates": 1 00:21:20.153 } 00:21:20.153 ] 00:21:20.153 } 00:21:20.153 ] 00:21:20.153 }, 00:21:20.153 { 00:21:20.153 "name": "nvmf_tgt_poll_group_003", 00:21:20.153 "admin_qpairs": 0, 00:21:20.153 "io_qpairs": 0, 00:21:20.153 "current_admin_qpairs": 0, 00:21:20.153 "current_io_qpairs": 0, 00:21:20.153 "pending_bdev_io": 0, 00:21:20.153 "completed_nvme_io": 0, 00:21:20.153 "transports": [ 00:21:20.153 { 00:21:20.153 "trtype": "RDMA", 00:21:20.153 "pending_data_buffer": 0, 00:21:20.153 "devices": [ 00:21:20.153 { 00:21:20.153 "name": "mlx5_0", 00:21:20.153 "polls": 776, 00:21:20.153 "idle_polls": 776, 00:21:20.153 "completions": 0, 00:21:20.153 "requests": 0, 00:21:20.153 "request_latency": 0, 00:21:20.153 "pending_free_request": 0, 00:21:20.153 "pending_rdma_read": 0, 00:21:20.153 "pending_rdma_write": 0, 00:21:20.153 "pending_rdma_send": 0, 00:21:20.153 "total_send_wrs": 0, 00:21:20.153 "send_doorbell_updates": 0, 00:21:20.153 "total_recv_wrs": 4096, 00:21:20.153 "recv_doorbell_updates": 1 00:21:20.153 }, 00:21:20.153 
{ 00:21:20.153 "name": "mlx5_1", 00:21:20.153 "polls": 776, 00:21:20.153 "idle_polls": 776, 00:21:20.153 "completions": 0, 00:21:20.153 "requests": 0, 00:21:20.153 "request_latency": 0, 00:21:20.153 "pending_free_request": 0, 00:21:20.153 "pending_rdma_read": 0, 00:21:20.153 "pending_rdma_write": 0, 00:21:20.153 "pending_rdma_send": 0, 00:21:20.153 "total_send_wrs": 0, 00:21:20.153 "send_doorbell_updates": 0, 00:21:20.153 "total_recv_wrs": 4096, 00:21:20.153 "recv_doorbell_updates": 1 00:21:20.153 } 00:21:20.153 ] 00:21:20.153 } 00:21:20.153 ] 00:21:20.153 } 00:21:20.153 ] 00:21:20.153 }' 00:21:20.153 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:21:20.153 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:21:20.153 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:21:20.153 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:21:20.412 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:21:20.412 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:21:20.412 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:21:20.412 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:21:20.412 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:21:20.412 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:21:20.412 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:21:20.412 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:21:20.412 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:21:20.412 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:21:20.412 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:21:20.412 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:21:20.412 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:21:20.412 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:21:20.412 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:21:20.412 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:21:20.412 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:21:20.412 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:21:20.412 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:21:20.412 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:21:20.412 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:21:20.412 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # 
MALLOC_BLOCK_SIZE=512 00:21:20.412 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:20.412 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.412 07:10:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:20.412 Malloc1 00:21:20.412 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.412 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:20.412 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.412 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:20.671 [2024-07-24 07:10:35.079185] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # 
local arg=nvme 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:21:20.671 [2024-07-24 07:10:35.131327] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:21:20.671 Failed to write to /dev/nvme-fabrics: Input/output error 00:21:20.671 could not add new controller: failed to write to nvme-fabrics device 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.671 07:10:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:21:21.606 07:10:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:21:21.606 07:10:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:21:21.606 07:10:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:21:21.606 07:10:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:21:21.606 07:10:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:21:24.139 07:10:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 
)) 00:21:24.139 07:10:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:21:24.139 07:10:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:21:24.139 07:10:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:21:24.139 07:10:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:21:24.139 07:10:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:21:24.139 07:10:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:24.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:24.706 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:24.706 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:21:24.706 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:21:24.706 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:24.706 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:21:24.706 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:24.706 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:21:24.706 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:24.706 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.706 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:24.706 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.706 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:21:24.706 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:21:24.706 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:21:24.706 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:21:24.707 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:24.707 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:21:24.707 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:24.707 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:21:24.707 07:10:39 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:24.707 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:21:24.707 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:21:24.707 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:21:24.707 [2024-07-24 07:10:39.213427] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:21:24.707 Failed to write to /dev/nvme-fabrics: Input/output error 00:21:24.707 could not add new controller: failed to write to nvme-fabrics device 00:21:24.707 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:21:24.707 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:24.707 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:24.707 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:24.707 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:21:24.707 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.707 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:24.707 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.707 07:10:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:21:25.713 07:10:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:21:25.713 07:10:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:21:25.713 07:10:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:21:25.713 07:10:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:21:25.713 07:10:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:21:27.618 07:10:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:21:27.618 07:10:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:21:27.618 07:10:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:21:27.878 07:10:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:21:27.878 07:10:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:21:27.878 07:10:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:21:27.878 07:10:42 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:28.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:28.815 07:10:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:28.815 07:10:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:21:28.815 07:10:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:21:28.815 07:10:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:28.815 07:10:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:21:28.815 07:10:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:28.815 07:10:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:21:28.815 07:10:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:28.815 07:10:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.815 07:10:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:28.815 07:10:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.815 07:10:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:21:28.815 07:10:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:21:28.815 07:10:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:28.815 07:10:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.815 07:10:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:28.815 07:10:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.815 07:10:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:28.815 07:10:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.815 07:10:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:28.815 [2024-07-24 07:10:43.271754] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:28.815 07:10:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.815 07:10:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:21:28.816 07:10:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.816 07:10:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:28.816 07:10:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.816 07:10:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:28.816 07:10:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 
-- # xtrace_disable 00:21:28.816 07:10:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:28.816 07:10:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.816 07:10:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:21:29.753 07:10:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:21:29.753 07:10:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:21:29.753 07:10:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:21:29.753 07:10:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:21:29.753 07:10:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:21:31.659 07:10:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:21:31.659 07:10:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:21:31.659 07:10:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:21:31.659 07:10:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:21:31.659 07:10:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:21:31.659 07:10:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:21:31.659 07:10:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:32.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:32.596 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:32.596 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:21:32.596 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:21:32.596 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:32.855 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:21:32.855 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:32.855 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:21:32.855 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:21:32.855 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.855 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:32.855 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.855 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:32.855 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.855 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:32.855 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.855 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:21:32.855 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:32.855 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.855 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:32.855 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.855 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:32.855 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.855 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:32.855 [2024-07-24 07:10:47.284696] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:32.855 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.855 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:21:32.855 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.855 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:32.855 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.855 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:32.855 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.855 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:32.855 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.855 07:10:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:21:33.793 07:10:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:21:33.793 07:10:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:21:33.793 07:10:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:21:33.793 07:10:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:21:33.793 07:10:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:21:35.701 07:10:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:21:35.701 07:10:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # 
lsblk -l -o NAME,SERIAL 00:21:35.701 07:10:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:21:35.701 07:10:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:21:35.701 07:10:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:21:35.701 07:10:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:21:35.701 07:10:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:36.638 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:36.638 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:36.638 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:21:36.638 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:36.638 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:21:36.638 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:21:36.638 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:36.897 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:21:36.897 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:21:36.897 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.897 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:36.897 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.897 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:36.897 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.897 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:36.897 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.897 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:21:36.897 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:36.897 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.897 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:36.897 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.897 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:36.897 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.897 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:36.897 [2024-07-24 07:10:51.316438] 
rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:36.897 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.897 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:21:36.897 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.897 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:36.897 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.897 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:36.897 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.897 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:36.897 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.897 07:10:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:21:37.834 07:10:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:21:37.834 07:10:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:21:37.834 07:10:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:21:37.834 07:10:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:21:37.834 07:10:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:21:39.736 07:10:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:21:39.736 07:10:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:21:39.736 07:10:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:21:39.736 07:10:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:21:39.737 07:10:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:21:39.737 07:10:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:21:39.737 07:10:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:40.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:40.673 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:40.673 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:21:40.673 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:21:40.673 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:40.673 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # 
lsblk -l -o NAME,SERIAL 00:21:40.673 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:40.673 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:21:40.673 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:21:40.673 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.673 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:40.932 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.932 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:40.932 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.932 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:40.932 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.932 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:21:40.932 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:40.932 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.932 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:40.932 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.932 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:40.932 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.932 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:40.932 [2024-07-24 07:10:55.329907] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:40.932 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.932 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:21:40.932 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.932 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:40.932 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.932 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:40.932 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.932 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:40.932 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.932 07:10:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:21:41.870 07:10:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:21:41.870 07:10:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:21:41.870 07:10:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:21:41.870 07:10:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:21:41.870 07:10:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:21:43.838 07:10:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:21:43.838 07:10:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:21:43.838 07:10:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:21:43.838 07:10:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:21:43.838 07:10:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:21:43.838 07:10:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:21:43.838 07:10:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:44.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:44.775 [2024-07-24 07:10:59.355886] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.775 07:10:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:21:46.154 07:11:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:21:46.154 07:11:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:21:46.154 07:11:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:21:46.154 07:11:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:21:46.154 07:11:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:21:48.057 07:11:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:21:48.057 07:11:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:21:48.057 07:11:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:21:48.057 07:11:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:21:48.057 07:11:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:21:48.057 07:11:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:21:48.057 07:11:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:48.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:48.992 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:48.992 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:21:48.992 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:21:48.992 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:48.992 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:21:48.992 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:48.992 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:21:48.992 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:21:48.992 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.992 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:48.992 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.992 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:48.992 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:48.993 [2024-07-24 07:11:03.405097] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.993 07:11:03 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:48.993 [2024-07-24 07:11:03.457267] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:48.993 [2024-07-24 07:11:03.513457] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:48.993 [2024-07-24 07:11:03.565686] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.993 07:11:03 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:48.993 [2024-07-24 07:11:03.617881] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:48.993 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:21:49.252 "tick_rate": 2500000000, 00:21:49.252 "poll_groups": [ 00:21:49.252 { 00:21:49.252 "name": "nvmf_tgt_poll_group_000", 00:21:49.252 "admin_qpairs": 2, 00:21:49.252 "io_qpairs": 27, 00:21:49.252 "current_admin_qpairs": 0, 00:21:49.252 "current_io_qpairs": 0, 00:21:49.252 "pending_bdev_io": 0, 00:21:49.252 "completed_nvme_io": 127, 00:21:49.252 "transports": [ 00:21:49.252 { 00:21:49.252 "trtype": "RDMA", 00:21:49.252 "pending_data_buffer": 0, 00:21:49.252 "devices": [ 00:21:49.252 { 00:21:49.252 "name": "mlx5_0", 00:21:49.252 "polls": 3321918, 00:21:49.252 "idle_polls": 3321595, 00:21:49.252 "completions": 363, 00:21:49.252 "requests": 181, 00:21:49.252 "request_latency": 45921308, 00:21:49.252 "pending_free_request": 0, 00:21:49.252 "pending_rdma_read": 0, 00:21:49.252 "pending_rdma_write": 0, 00:21:49.252 "pending_rdma_send": 0, 00:21:49.252 "total_send_wrs": 306, 00:21:49.252 "send_doorbell_updates": 157, 00:21:49.252 "total_recv_wrs": 4277, 00:21:49.252 "recv_doorbell_updates": 157 00:21:49.252 }, 00:21:49.252 { 00:21:49.252 "name": "mlx5_1", 00:21:49.252 "polls": 3321918, 00:21:49.252 "idle_polls": 3321918, 00:21:49.252 "completions": 0, 00:21:49.252 "requests": 0, 00:21:49.252 "request_latency": 0, 00:21:49.252 "pending_free_request": 0, 00:21:49.252 "pending_rdma_read": 0, 00:21:49.252 "pending_rdma_write": 0, 00:21:49.252 "pending_rdma_send": 0, 00:21:49.252 "total_send_wrs": 0, 00:21:49.252 "send_doorbell_updates": 0, 00:21:49.252 "total_recv_wrs": 4096, 00:21:49.252 "recv_doorbell_updates": 1 00:21:49.252 } 00:21:49.252 ] 00:21:49.252 } 00:21:49.252 ] 00:21:49.252 }, 00:21:49.252 { 00:21:49.252 "name": "nvmf_tgt_poll_group_001", 00:21:49.252 "admin_qpairs": 2, 00:21:49.252 "io_qpairs": 26, 00:21:49.252 "current_admin_qpairs": 0, 00:21:49.252 "current_io_qpairs": 0, 00:21:49.252 "pending_bdev_io": 0, 00:21:49.252 "completed_nvme_io": 76, 00:21:49.252 "transports": [ 00:21:49.252 { 00:21:49.252 "trtype": "RDMA", 00:21:49.252 "pending_data_buffer": 0, 00:21:49.252 "devices": [ 00:21:49.252 { 00:21:49.252 "name": "mlx5_0", 00:21:49.252 "polls": 3277683, 00:21:49.252 "idle_polls": 3277443, 00:21:49.252 "completions": 260, 00:21:49.252 "requests": 130, 00:21:49.252 "request_latency": 29495764, 00:21:49.252 "pending_free_request": 0, 00:21:49.252 "pending_rdma_read": 0, 00:21:49.252 "pending_rdma_write": 0, 00:21:49.252 "pending_rdma_send": 0, 00:21:49.252 "total_send_wrs": 205, 00:21:49.252 "send_doorbell_updates": 117, 00:21:49.252 "total_recv_wrs": 4226, 00:21:49.252 "recv_doorbell_updates": 118 00:21:49.252 }, 00:21:49.252 { 00:21:49.252 "name": "mlx5_1", 00:21:49.252 "polls": 3277683, 00:21:49.252 "idle_polls": 3277683, 00:21:49.252 "completions": 0, 00:21:49.252 "requests": 0, 00:21:49.252 
"request_latency": 0, 00:21:49.252 "pending_free_request": 0, 00:21:49.252 "pending_rdma_read": 0, 00:21:49.252 "pending_rdma_write": 0, 00:21:49.252 "pending_rdma_send": 0, 00:21:49.252 "total_send_wrs": 0, 00:21:49.252 "send_doorbell_updates": 0, 00:21:49.252 "total_recv_wrs": 4096, 00:21:49.252 "recv_doorbell_updates": 1 00:21:49.252 } 00:21:49.252 ] 00:21:49.252 } 00:21:49.252 ] 00:21:49.252 }, 00:21:49.252 { 00:21:49.252 "name": "nvmf_tgt_poll_group_002", 00:21:49.252 "admin_qpairs": 1, 00:21:49.252 "io_qpairs": 26, 00:21:49.252 "current_admin_qpairs": 0, 00:21:49.252 "current_io_qpairs": 0, 00:21:49.252 "pending_bdev_io": 0, 00:21:49.252 "completed_nvme_io": 126, 00:21:49.252 "transports": [ 00:21:49.252 { 00:21:49.252 "trtype": "RDMA", 00:21:49.252 "pending_data_buffer": 0, 00:21:49.252 "devices": [ 00:21:49.252 { 00:21:49.252 "name": "mlx5_0", 00:21:49.252 "polls": 3290023, 00:21:49.252 "idle_polls": 3289757, 00:21:49.252 "completions": 307, 00:21:49.252 "requests": 153, 00:21:49.252 "request_latency": 42864512, 00:21:49.252 "pending_free_request": 0, 00:21:49.252 "pending_rdma_read": 0, 00:21:49.252 "pending_rdma_write": 0, 00:21:49.252 "pending_rdma_send": 0, 00:21:49.252 "total_send_wrs": 266, 00:21:49.252 "send_doorbell_updates": 128, 00:21:49.252 "total_recv_wrs": 4249, 00:21:49.252 "recv_doorbell_updates": 128 00:21:49.252 }, 00:21:49.252 { 00:21:49.252 "name": "mlx5_1", 00:21:49.252 "polls": 3290023, 00:21:49.252 "idle_polls": 3290023, 00:21:49.252 "completions": 0, 00:21:49.252 "requests": 0, 00:21:49.252 "request_latency": 0, 00:21:49.252 "pending_free_request": 0, 00:21:49.252 "pending_rdma_read": 0, 00:21:49.252 "pending_rdma_write": 0, 00:21:49.252 "pending_rdma_send": 0, 00:21:49.252 "total_send_wrs": 0, 00:21:49.252 "send_doorbell_updates": 0, 00:21:49.252 "total_recv_wrs": 4096, 00:21:49.252 "recv_doorbell_updates": 1 00:21:49.252 } 00:21:49.252 ] 00:21:49.252 } 00:21:49.252 ] 00:21:49.252 }, 00:21:49.252 { 00:21:49.252 "name": "nvmf_tgt_poll_group_003", 00:21:49.252 "admin_qpairs": 2, 00:21:49.252 "io_qpairs": 26, 00:21:49.252 "current_admin_qpairs": 0, 00:21:49.252 "current_io_qpairs": 0, 00:21:49.252 "pending_bdev_io": 0, 00:21:49.252 "completed_nvme_io": 126, 00:21:49.252 "transports": [ 00:21:49.252 { 00:21:49.252 "trtype": "RDMA", 00:21:49.252 "pending_data_buffer": 0, 00:21:49.252 "devices": [ 00:21:49.252 { 00:21:49.252 "name": "mlx5_0", 00:21:49.252 "polls": 2481616, 00:21:49.252 "idle_polls": 2481300, 00:21:49.252 "completions": 360, 00:21:49.252 "requests": 180, 00:21:49.252 "request_latency": 46492616, 00:21:49.252 "pending_free_request": 0, 00:21:49.252 "pending_rdma_read": 0, 00:21:49.252 "pending_rdma_write": 0, 00:21:49.252 "pending_rdma_send": 0, 00:21:49.252 "total_send_wrs": 304, 00:21:49.252 "send_doorbell_updates": 154, 00:21:49.252 "total_recv_wrs": 4276, 00:21:49.252 "recv_doorbell_updates": 155 00:21:49.252 }, 00:21:49.252 { 00:21:49.252 "name": "mlx5_1", 00:21:49.252 "polls": 2481616, 00:21:49.252 "idle_polls": 2481616, 00:21:49.252 "completions": 0, 00:21:49.252 "requests": 0, 00:21:49.252 "request_latency": 0, 00:21:49.252 "pending_free_request": 0, 00:21:49.252 "pending_rdma_read": 0, 00:21:49.252 "pending_rdma_write": 0, 00:21:49.252 "pending_rdma_send": 0, 00:21:49.252 "total_send_wrs": 0, 00:21:49.252 "send_doorbell_updates": 0, 00:21:49.252 "total_recv_wrs": 4096, 00:21:49.252 "recv_doorbell_updates": 1 00:21:49.252 } 00:21:49.252 ] 00:21:49.252 } 00:21:49.252 ] 00:21:49.252 } 00:21:49.252 ] 00:21:49.252 }' 00:21:49.252 07:11:03 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1290 > 0 )) 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:21:49.252 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:21:49.511 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 164774200 > 0 )) 00:21:49.511 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:49.511 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:21:49.511 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:49.511 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:21:49.511 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:49.511 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:49.511 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:21:49.511 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:49.511 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:49.511 rmmod nvme_rdma 00:21:49.511 rmmod nvme_fabrics 00:21:49.511 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:49.511 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:21:49.511 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:21:49.511 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1662140 ']' 00:21:49.511 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1662140 00:21:49.511 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1662140 ']' 00:21:49.511 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1662140 00:21:49.511 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:21:49.511 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:49.511 07:11:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1662140 00:21:49.511 07:11:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:49.511 07:11:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:49.511 07:11:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1662140' 00:21:49.511 killing process with pid 1662140 00:21:49.511 07:11:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1662140 00:21:49.511 07:11:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1662140 00:21:52.046 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:52.046 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:52.046 00:21:52.046 real 0m41.101s 00:21:52.046 user 2m8.402s 00:21:52.046 sys 0m8.296s 00:21:52.046 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:52.046 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:52.046 ************************************ 00:21:52.046 END TEST nvmf_rpc 00:21:52.046 ************************************ 00:21:52.046 07:11:06 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:21:52.046 07:11:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:52.046 07:11:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:52.046 07:11:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:52.046 ************************************ 00:21:52.046 START TEST nvmf_invalid 00:21:52.046 ************************************ 00:21:52.046 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:21:52.046 * Looking for test storage... 
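For reference while the nvmf_invalid run starts up: the final check of nvmf_rpc above summed fields of the nvmf_get_stats output with the jsum helper (target/rpc.sh@19-20), which applies a jq filter and totals the matches with awk. A minimal re-creation, under the assumption that $stats holds the JSON captured at rpc.sh@110 (a sketch, not the exact helper source):

jsum() {
    local filter=$1
    # total every numeric value selected by the jq filter
    echo "$stats" | jq "$filter" | awk '{s+=$1} END {print s}'
}
jsum '.poll_groups[].io_qpairs'                               # 105 in this run
jsum '.poll_groups[].transports[].devices[].request_latency'  # 164774200 in this run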
00:21:52.046 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:52.046 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:52.046 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:21:52.046 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:52.046 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:52.046 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.047 07:11:06 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:21:52.047 07:11:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:22:00.170 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:22:00.171 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:22:00.171 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:22:00.171 Found net devices under 0000:d9:00.0: mlx_0_0 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:22:00.171 Found net devices under 0000:d9:00.1: mlx_0_1 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # rdma_device_init 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # uname 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:00.171 07:11:14 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:00.171 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:00.172 
07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:00.172 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:00.172 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:22:00.172 altname enp217s0f0np0 00:22:00.172 altname ens818f0np0 00:22:00.172 inet 192.168.100.8/24 scope global mlx_0_0 00:22:00.172 valid_lft forever preferred_lft forever 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:00.172 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:00.172 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:22:00.172 altname enp217s0f1np1 00:22:00.172 altname ens818f1np1 00:22:00.172 inet 192.168.100.9/24 scope global mlx_0_1 00:22:00.172 valid_lft forever preferred_lft forever 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in 
"${net_devs[@]}" 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:00.172 192.168.100.9' 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@457 -- # head -n 1 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:00.172 192.168.100.9' 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:00.172 192.168.100.9' 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@458 -- # tail -n +2 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@458 -- # head -n 1 00:22:00.172 07:11:14 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1671808 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1671808 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1671808 ']' 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:22:00.172 07:11:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:00.172 [2024-07-24 07:11:14.761410] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:22:00.172 [2024-07-24 07:11:14.761504] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.431 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.431 [2024-07-24 07:11:14.907527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:00.690 [2024-07-24 07:11:15.116187] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.690 [2024-07-24 07:11:15.116230] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.690 [2024-07-24 07:11:15.116244] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.690 [2024-07-24 07:11:15.116255] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
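The nvmf_tgt launched here is the build-tree binary with the core mask and tracepoint flags shown in the command line above; nvmfappstart records its pid and waits on the RPC socket before the invalid-argument cases are issued. A rough equivalent of that start-and-wait step, with the readiness poll below an illustrative stand-in rather than the actual waitforlisten implementation:

/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# poll the default RPC socket until the target accepts rpc.py calls
until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done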
00:22:00.690 [2024-07-24 07:11:15.116266] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:00.690 [2024-07-24 07:11:15.116390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.690 [2024-07-24 07:11:15.116493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:00.690 [2024-07-24 07:11:15.116554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.690 [2024-07-24 07:11:15.116566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:00.950 07:11:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:00.950 07:11:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:22:00.950 07:11:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:00.950 07:11:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:00.950 07:11:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:22:01.209 07:11:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.209 07:11:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:01.209 07:11:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15241 00:22:01.209 [2024-07-24 07:11:15.748246] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:22:01.209 07:11:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:22:01.209 { 00:22:01.209 "nqn": "nqn.2016-06.io.spdk:cnode15241", 00:22:01.209 "tgt_name": "foobar", 00:22:01.209 "method": "nvmf_create_subsystem", 00:22:01.209 "req_id": 1 00:22:01.209 } 00:22:01.209 Got JSON-RPC error response 00:22:01.209 response: 00:22:01.209 { 00:22:01.209 "code": -32603, 00:22:01.209 "message": "Unable to find target foobar" 00:22:01.209 }' 00:22:01.209 07:11:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:22:01.209 { 00:22:01.209 "nqn": "nqn.2016-06.io.spdk:cnode15241", 00:22:01.209 "tgt_name": "foobar", 00:22:01.209 "method": "nvmf_create_subsystem", 00:22:01.209 "req_id": 1 00:22:01.209 } 00:22:01.209 Got JSON-RPC error response 00:22:01.209 response: 00:22:01.209 { 00:22:01.209 "code": -32603, 00:22:01.209 "message": "Unable to find target foobar" 00:22:01.209 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:22:01.209 07:11:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:22:01.209 07:11:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode21967 00:22:01.469 [2024-07-24 07:11:15.932889] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21967: invalid serial number 'SPDKISFASTANDAWESOME' 00:22:01.469 07:11:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:22:01.469 { 00:22:01.469 "nqn": "nqn.2016-06.io.spdk:cnode21967", 00:22:01.469 "serial_number": 
"SPDKISFASTANDAWESOME\u001f", 00:22:01.469 "method": "nvmf_create_subsystem", 00:22:01.469 "req_id": 1 00:22:01.469 } 00:22:01.469 Got JSON-RPC error response 00:22:01.469 response: 00:22:01.469 { 00:22:01.469 "code": -32602, 00:22:01.469 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:22:01.469 }' 00:22:01.469 07:11:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:22:01.469 { 00:22:01.469 "nqn": "nqn.2016-06.io.spdk:cnode21967", 00:22:01.469 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:22:01.469 "method": "nvmf_create_subsystem", 00:22:01.469 "req_id": 1 00:22:01.469 } 00:22:01.469 Got JSON-RPC error response 00:22:01.469 response: 00:22:01.469 { 00:22:01.469 "code": -32602, 00:22:01.469 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:22:01.469 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:22:01.469 07:11:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:22:01.469 07:11:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode26812 00:22:01.728 [2024-07-24 07:11:16.117503] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26812: invalid model number 'SPDK_Controller' 00:22:01.728 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:22:01.728 { 00:22:01.728 "nqn": "nqn.2016-06.io.spdk:cnode26812", 00:22:01.728 "model_number": "SPDK_Controller\u001f", 00:22:01.728 "method": "nvmf_create_subsystem", 00:22:01.728 "req_id": 1 00:22:01.728 } 00:22:01.728 Got JSON-RPC error response 00:22:01.728 response: 00:22:01.728 { 00:22:01.728 "code": -32602, 00:22:01.728 "message": "Invalid MN SPDK_Controller\u001f" 00:22:01.728 }' 00:22:01.728 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:22:01.728 { 00:22:01.728 "nqn": "nqn.2016-06.io.spdk:cnode26812", 00:22:01.728 "model_number": "SPDK_Controller\u001f", 00:22:01.728 "method": "nvmf_create_subsystem", 00:22:01.728 "req_id": 1 00:22:01.728 } 00:22:01.728 Got JSON-RPC error response 00:22:01.728 response: 00:22:01.728 { 00:22:01.728 "code": -32602, 00:22:01.728 "message": "Invalid MN SPDK_Controller\u001f" 00:22:01.728 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:22:01.728 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:22:01.728 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:22:01.729 07:11:16 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+='}' 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
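The xtrace running above and below this point is SPDK's gen_random_s helper from test/nvmf/target/invalid.sh building a 21-character serial number one character at a time: printf %x turns a decimal code into hex, echo -e turns the hex escape into a character, and string+= appends it. A condensed sketch of that loop, in which the RANDOM-based index is an assumption (only the per-character printf/echo/append steps are visible in the log):

gen_random_s() {
    local length=$1 ll string=
    local chars=($(seq 32 127))                     # the ASCII codes listed in the chars=(...) line above
    for (( ll = 0; ll < length; ll++ )); do
        local code=${chars[RANDOM % ${#chars[@]}]}  # assumed way of picking a code
        local hex
        printf -v hex '%x' "$code"
        string+=$(echo -e "\x$hex")                 # append the character for that code
    done
    echo "$string"
}

Called as gen_random_s 21, it yields a string like the '>*1z[-}l&c&P?khVHdncc' echoed once the loop below finishes, which is then fed to nvmf_create_subsystem as a deliberately invalid serial number.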
00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x63' 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.729 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.730 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:22:01.730 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:22:01.730 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:22:01.730 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.730 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.730 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ > == \- ]] 00:22:01.730 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '>*1z[-}l&c&P?khVHdncc' 00:22:01.730 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '>*1z[-}l&c&P?khVHdncc' nqn.2016-06.io.spdk:cnode418 00:22:01.990 [2024-07-24 07:11:16.470723] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode418: invalid serial number '>*1z[-}l&c&P?khVHdncc' 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:22:01.990 { 00:22:01.990 "nqn": "nqn.2016-06.io.spdk:cnode418", 00:22:01.990 "serial_number": ">*1z[-}l&c&P?khVHdncc", 00:22:01.990 "method": "nvmf_create_subsystem", 00:22:01.990 "req_id": 1 00:22:01.990 } 00:22:01.990 Got JSON-RPC error response 00:22:01.990 response: 00:22:01.990 { 00:22:01.990 "code": -32602, 00:22:01.990 "message": "Invalid SN >*1z[-}l&c&P?khVHdncc" 00:22:01.990 }' 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:22:01.990 { 00:22:01.990 "nqn": "nqn.2016-06.io.spdk:cnode418", 00:22:01.990 "serial_number": ">*1z[-}l&c&P?khVHdncc", 00:22:01.990 "method": "nvmf_create_subsystem", 00:22:01.990 "req_id": 1 00:22:01.990 } 00:22:01.990 Got JSON-RPC error response 00:22:01.990 response: 00:22:01.990 { 00:22:01.990 "code": -32602, 00:22:01.990 "message": "Invalid SN >*1z[-}l&c&P?khVHdncc" 00:22:01.990 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@22 -- # local string 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:22:01.990 07:11:16 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.990 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:22:01.991 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:22:01.991 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:22:01.991 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.991 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:01.991 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:22:01.991 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:22:01.991 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:22:01.991 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.991 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll 
< length )) 00:22:01.991 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:22:01.991 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:22:01.991 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:22:01.991 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:01.991 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@24 -- # (( ll++ )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:22:02.251 07:11:16 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:02.251 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 112 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@24 -- # (( ll < length )) 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[  == \- ]] 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'S_K<8p?99WhdT{bj%32U}jI&CHE7Iqvpp_w7CHsR' 00:22:02.252 07:11:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'S_K<8p?99WhdT{bj%32U}jI&CHE7Iqvpp_w7CHsR' nqn.2016-06.io.spdk:cnode8940 00:22:02.511 [2024-07-24 07:11:16.988549] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8940: invalid model number 'S_K<8p?99WhdT{bj%32U}jI&CHE7Iqvpp_w7CHsR' 00:22:02.511 07:11:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:22:02.511 { 00:22:02.511 "nqn": "nqn.2016-06.io.spdk:cnode8940", 00:22:02.511 "model_number": "\u007fS_K<8p?99WhdT{bj%32U}jI&CHE7Iqvpp_w7CHsR", 00:22:02.511 "method": "nvmf_create_subsystem", 00:22:02.511 "req_id": 1 00:22:02.511 } 00:22:02.511 Got JSON-RPC error response 00:22:02.511 response: 00:22:02.511 { 00:22:02.511 "code": -32602, 00:22:02.511 "message": "Invalid MN \u007fS_K<8p?99WhdT{bj%32U}jI&CHE7Iqvpp_w7CHsR" 00:22:02.511 }' 00:22:02.511 07:11:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:22:02.511 { 00:22:02.511 "nqn": "nqn.2016-06.io.spdk:cnode8940", 00:22:02.511 "model_number": "\u007fS_K<8p?99WhdT{bj%32U}jI&CHE7Iqvpp_w7CHsR", 00:22:02.511 "method": "nvmf_create_subsystem", 00:22:02.511 "req_id": 1 00:22:02.511 } 00:22:02.511 Got JSON-RPC error response 00:22:02.511 response: 00:22:02.511 { 00:22:02.511 "code": -32602, 00:22:02.511 "message": "Invalid MN \u007fS_K<8p?99WhdT{bj%32U}jI&CHE7Iqvpp_w7CHsR" 00:22:02.511 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:22:02.511 07:11:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:22:02.771 [2024-07-24 07:11:17.206700] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f39feb93940) succeed. 00:22:02.771 [2024-07-24 07:11:17.216142] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f39feb4e940) succeed. 
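With the RDMA transport up (the two create_ib_device NOTICE lines just above), the remaining nvmf_invalid checks drive plain rpc.py calls and assert on the JSON-RPC errors that come back. A minimal sketch of the calls around this point, with workspace paths shortened; every flag shown is taken verbatim from the surrounding xtrace:

scripts/rpc.py nvmf_create_transport --trtype rdma
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
# Removing a listener that was never added is expected to fail with
# "Invalid parameters" (code -32602), which invalid.sh matches against:
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode \
    -t rdma -a 192.168.100.8 -s 4421

The cntlid-range cases further down follow the same pattern, passing -i/-I values such as 0, 65520, or the inverted pair 6/5 to nvmf_create_subsystem and matching the response against "Invalid cntlid range".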
00:22:03.058 07:11:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:22:03.317 07:11:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:22:03.317 07:11:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:22:03.317 07:11:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:22:03.317 192.168.100.9' 00:22:03.317 07:11:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:22:03.317 07:11:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:22:03.317 [2024-07-24 07:11:17.905763] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:22:03.317 07:11:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:22:03.317 { 00:22:03.317 "nqn": "nqn.2016-06.io.spdk:cnode", 00:22:03.317 "listen_address": { 00:22:03.317 "trtype": "rdma", 00:22:03.317 "traddr": "192.168.100.8", 00:22:03.317 "trsvcid": "4421" 00:22:03.317 }, 00:22:03.317 "method": "nvmf_subsystem_remove_listener", 00:22:03.317 "req_id": 1 00:22:03.317 } 00:22:03.317 Got JSON-RPC error response 00:22:03.317 response: 00:22:03.317 { 00:22:03.317 "code": -32602, 00:22:03.317 "message": "Invalid parameters" 00:22:03.317 }' 00:22:03.317 07:11:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:22:03.317 { 00:22:03.317 "nqn": "nqn.2016-06.io.spdk:cnode", 00:22:03.317 "listen_address": { 00:22:03.318 "trtype": "rdma", 00:22:03.318 "traddr": "192.168.100.8", 00:22:03.318 "trsvcid": "4421" 00:22:03.318 }, 00:22:03.318 "method": "nvmf_subsystem_remove_listener", 00:22:03.318 "req_id": 1 00:22:03.318 } 00:22:03.318 Got JSON-RPC error response 00:22:03.318 response: 00:22:03.318 { 00:22:03.318 "code": -32602, 00:22:03.318 "message": "Invalid parameters" 00:22:03.318 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:22:03.318 07:11:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23476 -i 0 00:22:03.576 [2024-07-24 07:11:18.086429] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23476: invalid cntlid range [0-65519] 00:22:03.576 07:11:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:22:03.576 { 00:22:03.576 "nqn": "nqn.2016-06.io.spdk:cnode23476", 00:22:03.576 "min_cntlid": 0, 00:22:03.576 "method": "nvmf_create_subsystem", 00:22:03.576 "req_id": 1 00:22:03.576 } 00:22:03.576 Got JSON-RPC error response 00:22:03.576 response: 00:22:03.576 { 00:22:03.576 "code": -32602, 00:22:03.576 "message": "Invalid cntlid range [0-65519]" 00:22:03.576 }' 00:22:03.576 07:11:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:22:03.576 { 00:22:03.576 "nqn": "nqn.2016-06.io.spdk:cnode23476", 00:22:03.576 "min_cntlid": 0, 00:22:03.576 "method": "nvmf_create_subsystem", 00:22:03.576 "req_id": 1 00:22:03.576 } 00:22:03.576 Got JSON-RPC error response 00:22:03.576 response: 00:22:03.576 { 00:22:03.576 "code": -32602, 00:22:03.576 "message": 
"Invalid cntlid range [0-65519]" 00:22:03.576 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:22:03.576 07:11:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17221 -i 65520 00:22:03.835 [2024-07-24 07:11:18.271121] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17221: invalid cntlid range [65520-65519] 00:22:03.835 07:11:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:22:03.835 { 00:22:03.835 "nqn": "nqn.2016-06.io.spdk:cnode17221", 00:22:03.835 "min_cntlid": 65520, 00:22:03.835 "method": "nvmf_create_subsystem", 00:22:03.835 "req_id": 1 00:22:03.835 } 00:22:03.835 Got JSON-RPC error response 00:22:03.835 response: 00:22:03.835 { 00:22:03.835 "code": -32602, 00:22:03.835 "message": "Invalid cntlid range [65520-65519]" 00:22:03.835 }' 00:22:03.835 07:11:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:22:03.835 { 00:22:03.835 "nqn": "nqn.2016-06.io.spdk:cnode17221", 00:22:03.835 "min_cntlid": 65520, 00:22:03.835 "method": "nvmf_create_subsystem", 00:22:03.835 "req_id": 1 00:22:03.835 } 00:22:03.835 Got JSON-RPC error response 00:22:03.835 response: 00:22:03.835 { 00:22:03.836 "code": -32602, 00:22:03.836 "message": "Invalid cntlid range [65520-65519]" 00:22:03.836 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:22:03.836 07:11:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19190 -I 0 00:22:03.836 [2024-07-24 07:11:18.463872] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19190: invalid cntlid range [1-0] 00:22:04.093 07:11:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:22:04.093 { 00:22:04.093 "nqn": "nqn.2016-06.io.spdk:cnode19190", 00:22:04.093 "max_cntlid": 0, 00:22:04.093 "method": "nvmf_create_subsystem", 00:22:04.093 "req_id": 1 00:22:04.093 } 00:22:04.093 Got JSON-RPC error response 00:22:04.093 response: 00:22:04.093 { 00:22:04.093 "code": -32602, 00:22:04.093 "message": "Invalid cntlid range [1-0]" 00:22:04.093 }' 00:22:04.093 07:11:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:22:04.093 { 00:22:04.093 "nqn": "nqn.2016-06.io.spdk:cnode19190", 00:22:04.093 "max_cntlid": 0, 00:22:04.093 "method": "nvmf_create_subsystem", 00:22:04.093 "req_id": 1 00:22:04.093 } 00:22:04.093 Got JSON-RPC error response 00:22:04.093 response: 00:22:04.093 { 00:22:04.093 "code": -32602, 00:22:04.093 "message": "Invalid cntlid range [1-0]" 00:22:04.093 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:22:04.093 07:11:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11349 -I 65520 00:22:04.093 [2024-07-24 07:11:18.656599] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11349: invalid cntlid range [1-65520] 00:22:04.093 07:11:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:22:04.093 { 00:22:04.093 "nqn": "nqn.2016-06.io.spdk:cnode11349", 00:22:04.093 "max_cntlid": 65520, 00:22:04.093 "method": "nvmf_create_subsystem", 00:22:04.093 "req_id": 1 00:22:04.093 } 00:22:04.093 Got 
JSON-RPC error response 00:22:04.093 response: 00:22:04.093 { 00:22:04.093 "code": -32602, 00:22:04.093 "message": "Invalid cntlid range [1-65520]" 00:22:04.093 }' 00:22:04.093 07:11:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:22:04.093 { 00:22:04.093 "nqn": "nqn.2016-06.io.spdk:cnode11349", 00:22:04.093 "max_cntlid": 65520, 00:22:04.093 "method": "nvmf_create_subsystem", 00:22:04.093 "req_id": 1 00:22:04.093 } 00:22:04.093 Got JSON-RPC error response 00:22:04.093 response: 00:22:04.093 { 00:22:04.093 "code": -32602, 00:22:04.093 "message": "Invalid cntlid range [1-65520]" 00:22:04.093 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:22:04.093 07:11:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23365 -i 6 -I 5 00:22:04.352 [2024-07-24 07:11:18.849330] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23365: invalid cntlid range [6-5] 00:22:04.352 07:11:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:22:04.352 { 00:22:04.352 "nqn": "nqn.2016-06.io.spdk:cnode23365", 00:22:04.352 "min_cntlid": 6, 00:22:04.352 "max_cntlid": 5, 00:22:04.352 "method": "nvmf_create_subsystem", 00:22:04.352 "req_id": 1 00:22:04.352 } 00:22:04.352 Got JSON-RPC error response 00:22:04.352 response: 00:22:04.352 { 00:22:04.352 "code": -32602, 00:22:04.352 "message": "Invalid cntlid range [6-5]" 00:22:04.352 }' 00:22:04.352 07:11:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:22:04.352 { 00:22:04.352 "nqn": "nqn.2016-06.io.spdk:cnode23365", 00:22:04.352 "min_cntlid": 6, 00:22:04.352 "max_cntlid": 5, 00:22:04.352 "method": "nvmf_create_subsystem", 00:22:04.352 "req_id": 1 00:22:04.352 } 00:22:04.352 Got JSON-RPC error response 00:22:04.352 response: 00:22:04.352 { 00:22:04.352 "code": -32602, 00:22:04.352 "message": "Invalid cntlid range [6-5]" 00:22:04.352 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:22:04.352 07:11:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:22:04.611 07:11:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:22:04.611 { 00:22:04.611 "name": "foobar", 00:22:04.611 "method": "nvmf_delete_target", 00:22:04.611 "req_id": 1 00:22:04.611 } 00:22:04.611 Got JSON-RPC error response 00:22:04.611 response: 00:22:04.611 { 00:22:04.611 "code": -32602, 00:22:04.611 "message": "The specified target doesn'\''t exist, cannot delete it." 00:22:04.611 }' 00:22:04.611 07:11:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:22:04.611 { 00:22:04.611 "name": "foobar", 00:22:04.611 "method": "nvmf_delete_target", 00:22:04.611 "req_id": 1 00:22:04.611 } 00:22:04.611 Got JSON-RPC error response 00:22:04.611 response: 00:22:04.611 { 00:22:04.611 "code": -32602, 00:22:04.611 "message": "The specified target doesn't exist, cannot delete it." 
00:22:04.611 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:22:04.611 07:11:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:22:04.611 07:11:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:22:04.611 07:11:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:04.611 07:11:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:22:04.611 07:11:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:04.611 07:11:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:04.611 07:11:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:22:04.612 07:11:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:04.612 07:11:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:04.612 rmmod nvme_rdma 00:22:04.612 rmmod nvme_fabrics 00:22:04.612 07:11:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:04.612 07:11:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:22:04.612 07:11:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:22:04.612 07:11:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1671808 ']' 00:22:04.612 07:11:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1671808 00:22:04.612 07:11:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 1671808 ']' 00:22:04.612 07:11:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 1671808 00:22:04.612 07:11:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:22:04.612 07:11:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:04.612 07:11:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1671808 00:22:04.612 07:11:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:04.612 07:11:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:04.612 07:11:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1671808' 00:22:04.612 killing process with pid 1671808 00:22:04.612 07:11:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 1671808 00:22:04.612 07:11:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 1671808 00:22:06.515 07:11:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:06.515 07:11:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:06.515 00:22:06.515 real 0m14.775s 00:22:06.515 user 0m25.866s 00:22:06.515 sys 0m7.865s 00:22:06.515 07:11:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:06.515 07:11:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:22:06.515 ************************************ 00:22:06.515 
END TEST nvmf_invalid 00:22:06.515 ************************************ 00:22:06.515 07:11:20 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:22:06.515 07:11:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:06.515 07:11:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:06.515 07:11:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:06.515 ************************************ 00:22:06.515 START TEST nvmf_connect_stress 00:22:06.515 ************************************ 00:22:06.515 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:22:06.515 * Looking for test storage... 00:22:06.515 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:22:06.515 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:06.515 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:22:06.515 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:06.515 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:06.515 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:06.516 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.775 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:06.775 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:06.775 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:22:06.775 07:11:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:14.899 07:11:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:14.899 07:11:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:22:14.899 07:11:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:14.899 07:11:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:14.899 07:11:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:14.899 07:11:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:14.899 07:11:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:14.899 07:11:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:22:14.899 07:11:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:14.899 07:11:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@296 -- # e810=() 00:22:14.899 07:11:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:22:14.899 07:11:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:22:14.899 07:11:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:22:14.899 07:11:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:22:14.899 07:11:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:22:14.899 07:11:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:14.899 07:11:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:14.899 07:11:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:14.899 07:11:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:14.899 07:11:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:14.899 07:11:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:14.899 07:11:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:14.899 07:11:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:22:14.899 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:14.899 07:11:29 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:22:14.899 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:22:14.899 Found net devices under 0000:d9:00.0: mlx_0_0 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:22:14.899 Found net devices 
under 0000:d9:00.1: mlx_0_1 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # uname 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:14.899 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@105 -- # continue 2 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:14.900 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:14.900 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:22:14.900 altname enp217s0f0np0 00:22:14.900 altname ens818f0np0 00:22:14.900 inet 192.168.100.8/24 scope global mlx_0_0 00:22:14.900 valid_lft forever preferred_lft forever 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:14.900 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:14.900 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:22:14.900 altname enp217s0f1np1 00:22:14.900 altname 
ens818f1np1 00:22:14.900 inet 192.168.100.9/24 scope global mlx_0_1 00:22:14.900 valid_lft forever preferred_lft forever 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # 
awk '{print $4}' 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:14.900 192.168.100.9' 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:14.900 192.168.100.9' 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@457 -- # head -n 1 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:14.900 192.168.100.9' 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@458 -- # tail -n +2 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@458 -- # head -n 1 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:14.900 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:14.901 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:14.901 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1676952 00:22:14.901 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1676952 00:22:14.901 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1676952 ']' 00:22:14.901 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.901 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@834 -- # local max_retries=100 00:22:14.901 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.901 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:14.901 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:14.901 07:11:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:14.901 [2024-07-24 07:11:29.349420] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:22:14.901 [2024-07-24 07:11:29.349508] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.901 EAL: No free 2048 kB hugepages reported on node 1 00:22:14.901 [2024-07-24 07:11:29.497442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:15.160 [2024-07-24 07:11:29.720749] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:15.160 [2024-07-24 07:11:29.720793] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.160 [2024-07-24 07:11:29.720810] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.160 [2024-07-24 07:11:29.720822] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.160 [2024-07-24 07:11:29.720834] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
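The nvmfappstart step traced above boots the SPDK NVMe-oF target (nvmf_tgt) with core mask 0xE and then sits in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers. A minimal stand-alone sketch of that launch-and-wait pattern follows; the retry budget and the use of rpc_get_methods as the readiness probe are illustrative assumptions, not details printed by this run.

# Sketch: start nvmf_tgt and wait for its RPC socket (probe and retry count are assumed).
SPDK_ROOT=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
for _ in $(seq 1 100); do
    # rpc_get_methods only succeeds once the target is listening on /var/tmp/spdk.sock.
    if "$SPDK_ROOT/scripts/rpc.py" rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done
echo "nvmf_tgt is up (pid $nvmfpid)"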
00:22:15.160 [2024-07-24 07:11:29.720895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:15.160 [2024-07-24 07:11:29.720955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.160 [2024-07-24 07:11:29.720964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:15.728 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:15.728 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:22:15.729 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:15.729 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:15.729 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:15.729 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.729 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:15.729 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.729 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:15.729 [2024-07-24 07:11:30.206168] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7f3b362a0940) succeed. 00:22:15.729 [2024-07-24 07:11:30.216132] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7f3b3625a940) succeed. 
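With the rdma transport created and both mlx5 IB devices registered, the test wires up the subsystem, listener and backing bdev through rpc_cmd just below. rpc_cmd is the autotest wrapper around SPDK's stock scripts/rpc.py client, so the equivalent direct invocations would look roughly like the sketch below; the transport options, NQN, serial, listen address and bdev parameters are copied from this trace, while using rpc.py against the default /var/tmp/spdk.sock socket is an assumption about the wrapper rather than something shown here.

# Approximate rpc.py equivalents of the rpc_cmd calls made by this test.
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$RPC bdev_null_create NULL1 1000 512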
00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:15.989 [2024-07-24 07:11:30.459667] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:15.989 NULL1 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1677231 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:15.989 07:11:30 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:15.989 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:15.989 07:11:30 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1677231 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.989 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:16.557 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.557 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1677231 00:22:16.557 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:16.557 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.557 07:11:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:16.816 07:11:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.816 07:11:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1677231 00:22:16.816 07:11:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:16.816 07:11:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.816 07:11:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:17.076 07:11:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.076 07:11:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1677231 00:22:17.076 07:11:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:17.076 07:11:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.076 07:11:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:17.644 07:11:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.644 07:11:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1677231 00:22:17.644 07:11:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:17.644 07:11:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.644 07:11:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:17.903 07:11:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.903 07:11:32 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1677231 00:22:17.903 07:11:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:17.903 07:11:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.903 07:11:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:18.162 07:11:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.162 07:11:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1677231 00:22:18.162 07:11:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:18.162 07:11:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.162 07:11:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:18.731 07:11:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.731 07:11:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1677231 00:22:18.731 07:11:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:18.731 07:11:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.731 07:11:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:18.989 07:11:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.989 07:11:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1677231 00:22:18.989 07:11:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:18.989 07:11:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.989 07:11:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:19.248 07:11:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.248 07:11:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1677231 00:22:19.248 07:11:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:19.248 07:11:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.248 07:11:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:19.816 07:11:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.816 07:11:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1677231 00:22:19.816 07:11:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:19.816 07:11:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.816 07:11:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:20.075 07:11:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.075 
07:11:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1677231 00:22:20.075 07:11:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:20.075 07:11:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.075 07:11:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:20.335 07:11:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.335 07:11:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1677231 00:22:20.335 07:11:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:20.335 07:11:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.335 07:11:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:20.903 07:11:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.903 07:11:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1677231 00:22:20.903 07:11:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:20.903 07:11:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.903 07:11:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:21.162 07:11:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.162 07:11:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1677231 00:22:21.162 07:11:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:21.162 07:11:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.162 07:11:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:21.421 07:11:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.421 07:11:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1677231 00:22:21.421 07:11:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:21.421 07:11:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.421 07:11:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:22.018 07:11:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.018 07:11:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1677231 00:22:22.018 07:11:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:22.018 07:11:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.018 07:11:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:22.285 07:11:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
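The block of near-identical entries above and below is the connect_stress.sh watchdog: it checks with kill -0 that the connect_stress process (PID 1677231) is still alive and, while it is, appears to replay a freshly generated batch of RPCs from rpc.txt via rpc_cmd. A simplified sketch of that polling pattern, with the RPC replay reduced to a comment and the one-second pacing assumed for illustration, is:

# Simplified view of the liveness / RPC-replay loop visible in this trace.
PERF_PID=1677231                                                         # connect_stress PID from the trace
rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt
while kill -0 "$PERF_PID" 2>/dev/null; do
    # The real test feeds the regenerated $rpcs batch through rpc_cmd at this point.
    sleep 1
done
echo "connect_stress (pid $PERF_PID) has exited"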
00:22:22.285 07:11:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1677231 00:22:22.285 07:11:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:22.285 07:11:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.285 07:11:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:22.544 07:11:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.544 07:11:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1677231 00:22:22.544 07:11:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:22.544 07:11:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.544 07:11:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:22.804 07:11:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.804 07:11:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1677231 00:22:22.804 07:11:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:22.804 07:11:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.804 07:11:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:23.371 07:11:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.371 07:11:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1677231 00:22:23.371 07:11:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:23.371 07:11:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.371 07:11:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:23.630 07:11:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.630 07:11:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1677231 00:22:23.630 07:11:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:23.630 07:11:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.630 07:11:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:23.888 07:11:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.888 07:11:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1677231 00:22:23.888 07:11:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:23.888 07:11:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.888 07:11:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:24.456 07:11:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:22:24.456 07:11:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1677231 00:22:24.456 07:11:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:24.456 07:11:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.456 07:11:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:24.715 07:11:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.715 07:11:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1677231 00:22:24.715 07:11:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:24.715 07:11:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.715 07:11:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:24.973 07:11:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.973 07:11:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1677231 00:22:24.973 07:11:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:24.973 07:11:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.973 07:11:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:25.541 07:11:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.541 07:11:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1677231 00:22:25.541 07:11:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:25.541 07:11:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.541 07:11:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:25.800 07:11:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.800 07:11:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1677231 00:22:25.800 07:11:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:25.800 07:11:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.800 07:11:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:26.058 07:11:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.058 07:11:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1677231 00:22:26.058 07:11:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:26.058 07:11:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.058 07:11:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:26.317 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 
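connect_stress has now announced the controller it is exercising: 192.168.100.8:4420, subsystem nqn.2016-06.io.spdk:cnode1. For a manual spot check against that same listener, an nvme-cli connect using the host identity and the 'nvme connect -i 15' form configured earlier in this trace would look roughly like the following; it is an illustrative sketch, not a command executed by this run.

# Illustrative manual connect/disconnect against the subsystem under test (not part of this run).
nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
    --hostid=8013ee90-59d8-e711-906e-00163566263e
nvme list                                   # the newly connected SPDK controller should be listed
nvme disconnect -n nqn.2016-06.io.spdk:cnode1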
00:22:26.576 07:11:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.576 07:11:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1677231 00:22:26.576 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1677231) - No such process 00:22:26.576 07:11:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1677231 00:22:26.576 07:11:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:22:26.576 07:11:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:22:26.576 07:11:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:22:26.576 07:11:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:26.576 07:11:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:22:26.576 07:11:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:26.576 07:11:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:26.576 07:11:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:22:26.576 07:11:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:26.576 07:11:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:26.576 rmmod nvme_rdma 00:22:26.576 rmmod nvme_fabrics 00:22:26.576 07:11:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:26.576 07:11:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:22:26.576 07:11:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:22:26.576 07:11:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1676952 ']' 00:22:26.576 07:11:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1676952 00:22:26.576 07:11:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1676952 ']' 00:22:26.576 07:11:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1676952 00:22:26.576 07:11:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:22:26.576 07:11:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:26.576 07:11:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1676952 00:22:26.576 07:11:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:26.576 07:11:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:26.576 07:11:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1676952' 00:22:26.576 killing process with pid 1676952 00:22:26.576 07:11:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@967 -- # 
kill 1676952 00:22:26.576 07:11:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1676952 00:22:28.480 07:11:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:28.480 07:11:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:28.480 00:22:28.480 real 0m21.823s 00:22:28.480 user 0m44.505s 00:22:28.480 sys 0m10.258s 00:22:28.480 07:11:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:28.480 07:11:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:28.480 ************************************ 00:22:28.480 END TEST nvmf_connect_stress 00:22:28.480 ************************************ 00:22:28.480 07:11:42 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:22:28.480 07:11:42 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:28.480 07:11:42 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:28.480 07:11:42 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:28.480 ************************************ 00:22:28.480 START TEST nvmf_fused_ordering 00:22:28.480 ************************************ 00:22:28.481 07:11:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:22:28.481 * Looking for test storage... 00:22:28.481 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:28.481 07:11:43 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:22:28.481 07:11:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:36.604 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:36.604 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:22:36.604 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:22:36.604 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:36.604 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:36.604 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:36.604 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:36.604 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:22:36.604 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:36.604 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:22:36.604 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:22:36.604 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:22:36.604 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:22:36.604 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:22:36.604 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:22:36.604 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:36.604 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:36.604 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:36.604 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:36.604 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:36.604 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:36.604 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:36.604 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:36.605 07:11:50 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:22:36.605 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:22:36.605 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:22:36.605 Found net devices under 0000:d9:00.0: mlx_0_0 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.605 07:11:50 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:22:36.605 Found net devices under 0000:d9:00.1: mlx_0_1 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # rdma_device_init 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # uname 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:36.605 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:36.605 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:22:36.605 altname enp217s0f0np0 00:22:36.605 altname ens818f0np0 00:22:36.605 inet 192.168.100.8/24 scope global mlx_0_0 00:22:36.605 valid_lft forever preferred_lft forever 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:36.605 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o 
-4 addr show mlx_0_1 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:36.606 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:36.606 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:22:36.606 altname enp217s0f1np1 00:22:36.606 altname ens818f1np1 00:22:36.606 inet 192.168.100.9/24 scope global mlx_0_1 00:22:36.606 valid_lft forever preferred_lft forever 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:36.606 
07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:36.606 192.168.100.9' 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:36.606 192.168.100.9' 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@457 -- # head -n 1 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:36.606 192.168.100.9' 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@458 -- # tail -n +2 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@458 -- # head -n 1 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:36.606 
07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1683272 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1683272 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1683272 ']' 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:36.606 07:11:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:36.606 [2024-07-24 07:11:50.976079] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:22:36.606 [2024-07-24 07:11:50.976176] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:36.606 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.606 [2024-07-24 07:11:51.123554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.865 [2024-07-24 07:11:51.317680] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:36.865 [2024-07-24 07:11:51.317722] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:36.865 [2024-07-24 07:11:51.317736] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:36.865 [2024-07-24 07:11:51.317766] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:36.865 [2024-07-24 07:11:51.317778] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
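[editor note] Above, nvmfappstart launches the target (/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2, PID 1683272) and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A rough sketch of that start-and-wait step, assuming a simple poll on rpc_get_methods; the retry count and polling RPC are illustrative, not the actual waitforlisten implementation in common/autotest_common.sh:
# Hedged sketch only; the real waitforlisten logic may differ.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
for _ in $(seq 1 100); do
    # treat the target as "listening" once an RPC round-trip on the UNIX socket succeeds
    if /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods > /dev/null 2>&1; then
        break
    fi
    sleep 0.5
done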
00:22:36.865 [2024-07-24 07:11:51.317815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.125 07:11:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:37.125 07:11:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:22:37.125 07:11:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:37.125 07:11:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:37.125 07:11:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:37.384 07:11:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:37.384 07:11:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:37.384 07:11:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.384 07:11:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:37.384 [2024-07-24 07:11:51.813725] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028840/0x7fd5343bd940) succeed. 00:22:37.384 [2024-07-24 07:11:51.823251] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000289c0/0x7fd534379940) succeed. 00:22:37.384 07:11:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.384 07:11:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:22:37.384 07:11:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.384 07:11:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:37.384 07:11:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.384 07:11:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:37.385 07:11:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.385 07:11:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:37.385 [2024-07-24 07:11:51.940918] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:37.385 07:11:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.385 07:11:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:22:37.385 07:11:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.385 07:11:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:37.385 NULL1 00:22:37.385 07:11:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.385 07:11:51 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:22:37.385 07:11:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.385 07:11:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:37.385 07:11:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.385 07:11:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:22:37.385 07:11:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.385 07:11:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:37.385 07:11:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.385 07:11:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:37.645 [2024-07-24 07:11:52.021850] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:22:37.645 [2024-07-24 07:11:52.021921] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1683399 ] 00:22:37.645 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.904 Attached to nqn.2016-06.io.spdk:cnode1 00:22:37.904 Namespace ID: 1 size: 1GB 00:22:37.904 fused_ordering(0) 00:22:37.904 fused_ordering(1) 00:22:37.904 fused_ordering(2) 00:22:37.904 fused_ordering(3) 00:22:37.904 fused_ordering(4) 00:22:37.904 fused_ordering(5) 00:22:37.904 fused_ordering(6) 00:22:37.904 fused_ordering(7) 00:22:37.904 fused_ordering(8) 00:22:37.904 fused_ordering(9) 00:22:37.904 fused_ordering(10) 00:22:37.904 fused_ordering(11) 00:22:37.904 fused_ordering(12) 00:22:37.904 fused_ordering(13) 00:22:37.904 fused_ordering(14) 00:22:37.904 fused_ordering(15) 00:22:37.904 fused_ordering(16) 00:22:37.904 fused_ordering(17) 00:22:37.904 fused_ordering(18) 00:22:37.904 fused_ordering(19) 00:22:37.904 fused_ordering(20) 00:22:37.904 fused_ordering(21) 00:22:37.904 fused_ordering(22) 00:22:37.904 fused_ordering(23) 00:22:37.904 fused_ordering(24) 00:22:37.904 fused_ordering(25) 00:22:37.904 fused_ordering(26) 00:22:37.904 fused_ordering(27) 00:22:37.904 fused_ordering(28) 00:22:37.904 fused_ordering(29) 00:22:37.904 fused_ordering(30) 00:22:37.904 fused_ordering(31) 00:22:37.904 fused_ordering(32) 00:22:37.904 fused_ordering(33) 00:22:37.904 fused_ordering(34) 00:22:37.904 fused_ordering(35) 00:22:37.904 fused_ordering(36) 00:22:37.904 fused_ordering(37) 00:22:37.904 fused_ordering(38) 00:22:37.904 fused_ordering(39) 00:22:37.904 fused_ordering(40) 00:22:37.904 fused_ordering(41) 00:22:37.904 fused_ordering(42) 00:22:37.904 fused_ordering(43) 00:22:37.904 fused_ordering(44) 00:22:37.904 fused_ordering(45) 00:22:37.904 fused_ordering(46) 00:22:37.904 fused_ordering(47) 00:22:37.904 fused_ordering(48) 00:22:37.904 fused_ordering(49) 00:22:37.904 fused_ordering(50) 00:22:37.904 fused_ordering(51) 00:22:37.904 fused_ordering(52) 
00:22:37.904 fused_ordering(53) 00:22:37.904 fused_ordering(54) 00:22:37.904 fused_ordering(55) 00:22:37.904 fused_ordering(56) 00:22:37.904 fused_ordering(57) 00:22:37.904 fused_ordering(58) 00:22:37.904 fused_ordering(59) 00:22:37.904 fused_ordering(60) 00:22:37.904 fused_ordering(61) 00:22:37.904 fused_ordering(62) 00:22:37.904 fused_ordering(63) 00:22:37.905 fused_ordering(64) 00:22:37.905 fused_ordering(65) 00:22:37.905 fused_ordering(66) 00:22:37.905 fused_ordering(67) 00:22:37.905 fused_ordering(68) 00:22:37.905 fused_ordering(69) 00:22:37.905 fused_ordering(70) 00:22:37.905 fused_ordering(71) 00:22:37.905 fused_ordering(72) 00:22:37.905 fused_ordering(73) 00:22:37.905 fused_ordering(74) 00:22:37.905 fused_ordering(75) 00:22:37.905 fused_ordering(76) 00:22:37.905 fused_ordering(77) 00:22:37.905 fused_ordering(78) 00:22:37.905 fused_ordering(79) 00:22:37.905 fused_ordering(80) 00:22:37.905 fused_ordering(81) 00:22:37.905 fused_ordering(82) 00:22:37.905 fused_ordering(83) 00:22:37.905 fused_ordering(84) 00:22:37.905 fused_ordering(85) 00:22:37.905 fused_ordering(86) 00:22:37.905 fused_ordering(87) 00:22:37.905 fused_ordering(88) 00:22:37.905 fused_ordering(89) 00:22:37.905 fused_ordering(90) 00:22:37.905 fused_ordering(91) 00:22:37.905 fused_ordering(92) 00:22:37.905 fused_ordering(93) 00:22:37.905 fused_ordering(94) 00:22:37.905 fused_ordering(95) 00:22:37.905 fused_ordering(96) 00:22:37.905 fused_ordering(97) 00:22:37.905 fused_ordering(98) 00:22:37.905 fused_ordering(99) 00:22:37.905 fused_ordering(100) 00:22:37.905 fused_ordering(101) 00:22:37.905 fused_ordering(102) 00:22:37.905 fused_ordering(103) 00:22:37.905 fused_ordering(104) 00:22:37.905 fused_ordering(105) 00:22:37.905 fused_ordering(106) 00:22:37.905 fused_ordering(107) 00:22:37.905 fused_ordering(108) 00:22:37.905 fused_ordering(109) 00:22:37.905 fused_ordering(110) 00:22:37.905 fused_ordering(111) 00:22:37.905 fused_ordering(112) 00:22:37.905 fused_ordering(113) 00:22:37.905 fused_ordering(114) 00:22:37.905 fused_ordering(115) 00:22:37.905 fused_ordering(116) 00:22:37.905 fused_ordering(117) 00:22:37.905 fused_ordering(118) 00:22:37.905 fused_ordering(119) 00:22:37.905 fused_ordering(120) 00:22:37.905 fused_ordering(121) 00:22:37.905 fused_ordering(122) 00:22:37.905 fused_ordering(123) 00:22:37.905 fused_ordering(124) 00:22:37.905 fused_ordering(125) 00:22:37.905 fused_ordering(126) 00:22:37.905 fused_ordering(127) 00:22:37.905 fused_ordering(128) 00:22:37.905 fused_ordering(129) 00:22:37.905 fused_ordering(130) 00:22:37.905 fused_ordering(131) 00:22:37.905 fused_ordering(132) 00:22:37.905 fused_ordering(133) 00:22:37.905 fused_ordering(134) 00:22:37.905 fused_ordering(135) 00:22:37.905 fused_ordering(136) 00:22:37.905 fused_ordering(137) 00:22:37.905 fused_ordering(138) 00:22:37.905 fused_ordering(139) 00:22:37.905 fused_ordering(140) 00:22:37.905 fused_ordering(141) 00:22:37.905 fused_ordering(142) 00:22:37.905 fused_ordering(143) 00:22:37.905 fused_ordering(144) 00:22:37.905 fused_ordering(145) 00:22:37.905 fused_ordering(146) 00:22:37.905 fused_ordering(147) 00:22:37.905 fused_ordering(148) 00:22:37.905 fused_ordering(149) 00:22:37.905 fused_ordering(150) 00:22:37.905 fused_ordering(151) 00:22:37.905 fused_ordering(152) 00:22:37.905 fused_ordering(153) 00:22:37.905 fused_ordering(154) 00:22:37.905 fused_ordering(155) 00:22:37.905 fused_ordering(156) 00:22:37.905 fused_ordering(157) 00:22:37.905 fused_ordering(158) 00:22:37.905 fused_ordering(159) 00:22:37.905 fused_ordering(160) 00:22:37.905 fused_ordering(161) 
00:22:37.905 fused_ordering(162) 00:22:37.905 fused_ordering(163) 00:22:37.905 fused_ordering(164) 00:22:37.905 fused_ordering(165) 00:22:37.905 fused_ordering(166) 00:22:37.905 fused_ordering(167) 00:22:37.905 fused_ordering(168) 00:22:37.905 fused_ordering(169) 00:22:37.905 fused_ordering(170) 00:22:37.905 fused_ordering(171) 00:22:37.905 fused_ordering(172) 00:22:37.905 fused_ordering(173) 00:22:37.905 fused_ordering(174) 00:22:37.905 fused_ordering(175) 00:22:37.905 fused_ordering(176) 00:22:37.905 fused_ordering(177) 00:22:37.905 fused_ordering(178) 00:22:37.905 fused_ordering(179) 00:22:37.905 fused_ordering(180) 00:22:37.905 fused_ordering(181) 00:22:37.905 fused_ordering(182) 00:22:37.905 fused_ordering(183) 00:22:37.905 fused_ordering(184) 00:22:37.905 fused_ordering(185) 00:22:37.905 fused_ordering(186) 00:22:37.905 fused_ordering(187) 00:22:37.905 fused_ordering(188) 00:22:37.905 fused_ordering(189) 00:22:37.905 fused_ordering(190) 00:22:37.905 fused_ordering(191) 00:22:37.905 fused_ordering(192) 00:22:37.905 fused_ordering(193) 00:22:37.905 fused_ordering(194) 00:22:37.905 fused_ordering(195) 00:22:37.905 fused_ordering(196) 00:22:37.905 fused_ordering(197) 00:22:37.905 fused_ordering(198) 00:22:37.905 fused_ordering(199) 00:22:37.905 fused_ordering(200) 00:22:37.905 fused_ordering(201) 00:22:37.905 fused_ordering(202) 00:22:37.905 fused_ordering(203) 00:22:37.905 fused_ordering(204) 00:22:37.905 fused_ordering(205) 00:22:37.905 fused_ordering(206) 00:22:37.905 fused_ordering(207) 00:22:37.905 fused_ordering(208) 00:22:37.905 fused_ordering(209) 00:22:37.905 fused_ordering(210) 00:22:37.905 fused_ordering(211) 00:22:37.905 fused_ordering(212) 00:22:37.905 fused_ordering(213) 00:22:37.905 fused_ordering(214) 00:22:37.905 fused_ordering(215) 00:22:37.905 fused_ordering(216) 00:22:37.905 fused_ordering(217) 00:22:37.905 fused_ordering(218) 00:22:37.905 fused_ordering(219) 00:22:37.905 fused_ordering(220) 00:22:37.905 fused_ordering(221) 00:22:37.905 fused_ordering(222) 00:22:37.905 fused_ordering(223) 00:22:37.905 fused_ordering(224) 00:22:37.905 fused_ordering(225) 00:22:37.905 fused_ordering(226) 00:22:37.905 fused_ordering(227) 00:22:37.905 fused_ordering(228) 00:22:37.905 fused_ordering(229) 00:22:37.905 fused_ordering(230) 00:22:37.905 fused_ordering(231) 00:22:37.905 fused_ordering(232) 00:22:37.905 fused_ordering(233) 00:22:37.905 fused_ordering(234) 00:22:37.905 fused_ordering(235) 00:22:37.905 fused_ordering(236) 00:22:37.905 fused_ordering(237) 00:22:37.905 fused_ordering(238) 00:22:37.905 fused_ordering(239) 00:22:37.905 fused_ordering(240) 00:22:37.905 fused_ordering(241) 00:22:37.905 fused_ordering(242) 00:22:37.905 fused_ordering(243) 00:22:37.905 fused_ordering(244) 00:22:37.905 fused_ordering(245) 00:22:37.905 fused_ordering(246) 00:22:37.905 fused_ordering(247) 00:22:37.905 fused_ordering(248) 00:22:37.905 fused_ordering(249) 00:22:37.905 fused_ordering(250) 00:22:37.905 fused_ordering(251) 00:22:37.905 fused_ordering(252) 00:22:37.905 fused_ordering(253) 00:22:37.905 fused_ordering(254) 00:22:37.905 fused_ordering(255) 00:22:37.905 fused_ordering(256) 00:22:37.905 fused_ordering(257) 00:22:37.905 fused_ordering(258) 00:22:37.905 fused_ordering(259) 00:22:37.905 fused_ordering(260) 00:22:37.905 fused_ordering(261) 00:22:37.905 fused_ordering(262) 00:22:37.905 fused_ordering(263) 00:22:37.905 fused_ordering(264) 00:22:37.905 fused_ordering(265) 00:22:37.905 fused_ordering(266) 00:22:37.905 fused_ordering(267) 00:22:37.905 fused_ordering(268) 00:22:37.905 
fused_ordering(269) 00:22:37.905 fused_ordering(270) 00:22:37.905 fused_ordering(271) 00:22:37.905 fused_ordering(272) 00:22:37.905 fused_ordering(273) 00:22:37.905 fused_ordering(274) 00:22:37.905 fused_ordering(275) 00:22:37.905 fused_ordering(276) 00:22:37.905 fused_ordering(277) 00:22:37.905 fused_ordering(278) 00:22:37.905 fused_ordering(279) 00:22:37.905 fused_ordering(280) 00:22:37.905 fused_ordering(281) 00:22:37.905 fused_ordering(282) 00:22:37.905 fused_ordering(283) 00:22:37.905 fused_ordering(284) 00:22:37.905 fused_ordering(285) 00:22:37.905 fused_ordering(286) 00:22:37.905 fused_ordering(287) 00:22:37.905 fused_ordering(288) 00:22:37.905 fused_ordering(289) 00:22:37.905 fused_ordering(290) 00:22:37.905 fused_ordering(291) 00:22:37.905 fused_ordering(292) 00:22:37.905 fused_ordering(293) 00:22:37.905 fused_ordering(294) 00:22:37.905 fused_ordering(295) 00:22:37.905 fused_ordering(296) 00:22:37.905 fused_ordering(297) 00:22:37.905 fused_ordering(298) 00:22:37.905 fused_ordering(299) 00:22:37.905 fused_ordering(300) 00:22:37.905 fused_ordering(301) 00:22:37.905 fused_ordering(302) 00:22:37.905 fused_ordering(303) 00:22:37.905 fused_ordering(304) 00:22:37.905 fused_ordering(305) 00:22:37.905 fused_ordering(306) 00:22:37.905 fused_ordering(307) 00:22:37.905 fused_ordering(308) 00:22:37.905 fused_ordering(309) 00:22:37.905 fused_ordering(310) 00:22:37.905 fused_ordering(311) 00:22:37.905 fused_ordering(312) 00:22:37.905 fused_ordering(313) 00:22:37.905 fused_ordering(314) 00:22:37.905 fused_ordering(315) 00:22:37.905 fused_ordering(316) 00:22:37.905 fused_ordering(317) 00:22:37.905 fused_ordering(318) 00:22:37.905 fused_ordering(319) 00:22:37.905 fused_ordering(320) 00:22:37.906 fused_ordering(321) 00:22:37.906 fused_ordering(322) 00:22:37.906 fused_ordering(323) 00:22:37.906 fused_ordering(324) 00:22:37.906 fused_ordering(325) 00:22:37.906 fused_ordering(326) 00:22:37.906 fused_ordering(327) 00:22:37.906 fused_ordering(328) 00:22:37.906 fused_ordering(329) 00:22:37.906 fused_ordering(330) 00:22:37.906 fused_ordering(331) 00:22:37.906 fused_ordering(332) 00:22:37.906 fused_ordering(333) 00:22:37.906 fused_ordering(334) 00:22:37.906 fused_ordering(335) 00:22:37.906 fused_ordering(336) 00:22:37.906 fused_ordering(337) 00:22:37.906 fused_ordering(338) 00:22:37.906 fused_ordering(339) 00:22:37.906 fused_ordering(340) 00:22:37.906 fused_ordering(341) 00:22:37.906 fused_ordering(342) 00:22:37.906 fused_ordering(343) 00:22:37.906 fused_ordering(344) 00:22:37.906 fused_ordering(345) 00:22:37.906 fused_ordering(346) 00:22:37.906 fused_ordering(347) 00:22:37.906 fused_ordering(348) 00:22:37.906 fused_ordering(349) 00:22:37.906 fused_ordering(350) 00:22:37.906 fused_ordering(351) 00:22:37.906 fused_ordering(352) 00:22:37.906 fused_ordering(353) 00:22:37.906 fused_ordering(354) 00:22:37.906 fused_ordering(355) 00:22:37.906 fused_ordering(356) 00:22:37.906 fused_ordering(357) 00:22:37.906 fused_ordering(358) 00:22:37.906 fused_ordering(359) 00:22:37.906 fused_ordering(360) 00:22:37.906 fused_ordering(361) 00:22:37.906 fused_ordering(362) 00:22:37.906 fused_ordering(363) 00:22:37.906 fused_ordering(364) 00:22:37.906 fused_ordering(365) 00:22:37.906 fused_ordering(366) 00:22:37.906 fused_ordering(367) 00:22:37.906 fused_ordering(368) 00:22:37.906 fused_ordering(369) 00:22:37.906 fused_ordering(370) 00:22:37.906 fused_ordering(371) 00:22:37.906 fused_ordering(372) 00:22:37.906 fused_ordering(373) 00:22:37.906 fused_ordering(374) 00:22:37.906 fused_ordering(375) 00:22:37.906 fused_ordering(376) 
00:22:37.906 fused_ordering(377) [... fused_ordering(378) through fused_ordering(1019) reported consecutively with timestamps between 00:22:37.906 and 00:22:38.428; the run of counter lines is condensed here ...] 00:22:38.428 fused_ordering(1020) 00:22:38.428 
fused_ordering(1021) 00:22:38.428 fused_ordering(1022) 00:22:38.428 fused_ordering(1023) 00:22:38.428 07:11:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:22:38.428 07:11:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:22:38.428 07:11:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:38.428 07:11:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:22:38.428 07:11:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:38.428 07:11:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:38.428 07:11:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:22:38.428 07:11:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:38.428 07:11:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:38.428 rmmod nvme_rdma 00:22:38.428 rmmod nvme_fabrics 00:22:38.428 07:11:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:38.428 07:11:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:22:38.428 07:11:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:22:38.428 07:11:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1683272 ']' 00:22:38.428 07:11:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1683272 00:22:38.428 07:11:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1683272 ']' 00:22:38.428 07:11:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1683272 00:22:38.428 07:11:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:22:38.428 07:11:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:38.428 07:11:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1683272 00:22:38.428 07:11:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:38.428 07:11:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:38.428 07:11:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1683272' 00:22:38.428 killing process with pid 1683272 00:22:38.428 07:11:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1683272 00:22:38.687 07:11:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1683272 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:40.063 00:22:40.063 real 0m11.531s 00:22:40.063 user 0m6.367s 00:22:40.063 sys 0m6.776s 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:40.063 ************************************ 00:22:40.063 END TEST nvmf_fused_ordering 00:22:40.063 ************************************ 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:40.063 ************************************ 00:22:40.063 START TEST nvmf_ns_masking 00:22:40.063 ************************************ 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:22:40.063 * Looking for test storage... 00:22:40.063 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.063 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:40.064 07:11:54 
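The nvmf/common.sh block traced above boils down to a handful of environment defaults plus a host NQN produced with nvme gen-hostnqn. A minimal sketch of that setup, assuming stock nvme-cli on PATH; the generated NQN/UUID differ on every run (the 8013ee90-... value above is just this run's result), and the UUID-stripping expansion here is one possible derivation, not necessarily the framework's own:

    # Defaults mirrored from the nvmf/common.sh trace above
    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVMF_IP_PREFIX=192.168.100
    NVME_HOSTNQN=$(nvme gen-hostnqn)           # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}        # keep only the UUID portion (one way to do it)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn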
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=66aefbc0-798e-40ad-9211-ed105d91902a 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=c0298000-610b-48b6-a1e9-de6631435415 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=47a02729-64b9-4c67-806e-f4b110d56389 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:40.064 07:11:54 
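ns_masking.sh then generates its own identifiers before touching the target. A sketch of that step, using the rpc.py path from this workspace; the three uuidgen values are random per run (66aefbc0-..., c0298000-... and 47a02729-... above are one instance):

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    loops=5
    ns1uuid=$(uuidgen)
    ns2uuid=$(uuidgen)
    SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN1=nqn.2016-06.io.spdk:host1
    HOSTNQN2=nqn.2016-06.io.spdk:host2
    HOSTID=$(uuidgen)                           # later handed to 'nvme connect -I'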
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:22:40.064 07:11:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 
-- # pci_devs+=("${e810[@]}") 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:22:48.222 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:22:48.222 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:22:48.222 Found net devices under 0000:d9:00.0: mlx_0_0 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:48.222 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.223 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:22:48.223 Found net devices under 0000:d9:00.1: mlx_0_1 00:22:48.223 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.223 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:48.223 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:22:48.223 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:48.223 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:48.223 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:48.223 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # rdma_device_init 00:22:48.223 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:48.223 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # uname 00:22:48.223 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:48.223 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:48.223 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:48.223 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:48.223 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:48.223 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:48.223 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:48.223 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:48.223 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:48.223 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:48.223 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:48.223 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev 
rxe_net_devs 00:22:48.223 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:48.223 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:48.223 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:48.482 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:48.482 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:48.482 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:48.483 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:48.483 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:22:48.483 altname enp217s0f0np0 00:22:48.483 altname ens818f0np0 00:22:48.483 inet 192.168.100.8/24 scope global mlx_0_0 00:22:48.483 valid_lft forever preferred_lft forever 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 
00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:48.483 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:48.483 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:22:48.483 altname enp217s0f1np1 00:22:48.483 altname ens818f1np1 00:22:48.483 inet 192.168.100.9/24 scope global mlx_0_1 00:22:48.483 valid_lft forever preferred_lft forever 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:48.483 07:12:02 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:48.483 192.168.100.9' 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:48.483 192.168.100.9' 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@457 -- # head -n 1 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:48.483 192.168.100.9' 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@458 -- # tail -n +2 00:22:48.483 07:12:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@458 -- # head -n 1 00:22:48.483 07:12:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:48.483 07:12:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:48.483 07:12:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:48.483 07:12:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:48.483 07:12:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:48.483 07:12:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:48.483 07:12:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:22:48.483 07:12:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:48.483 07:12:03 
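The address discovery traced above reduces to pulling the first IPv4 address off each mlx interface and splitting the resulting list into the first and second target IPs. A condensed sketch of that logic; the interface names and 192.168.100.x addresses are specific to this rig, and the framework's rxe/soft-RoCE handling is omitted:

    get_ip_address() {
        # First IPv4 address on the interface, without the prefix length
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
    modprobe nvme-rdma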
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:48.483 07:12:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:22:48.483 07:12:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1687922 00:22:48.483 07:12:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:48.483 07:12:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1687922 00:22:48.483 07:12:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1687922 ']' 00:22:48.483 07:12:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.483 07:12:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:48.483 07:12:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.484 07:12:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:48.484 07:12:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:22:48.743 [2024-07-24 07:12:03.131201] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:22:48.743 [2024-07-24 07:12:03.131295] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.743 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.743 [2024-07-24 07:12:03.280699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.002 [2024-07-24 07:12:03.502215] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.002 [2024-07-24 07:12:03.502262] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.002 [2024-07-24 07:12:03.502277] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.002 [2024-07-24 07:12:03.502311] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.002 [2024-07-24 07:12:03.502323] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
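nvmfappstart amounts to launching nvmf_tgt and polling its RPC socket until it answers; the EAL and app_setup_trace notices above are printed during that startup. A rough sketch, assuming the default /var/tmp/spdk.sock socket; the framework's waitforlisten does more careful timeout and liveness handling than this loop:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # Poll until the RPC server responds; rpc_get_methods is a cheap query.
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; break; }
        sleep 0.5
    done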
00:22:49.002 [2024-07-24 07:12:03.502359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.260 07:12:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:49.260 07:12:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:22:49.519 07:12:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:49.519 07:12:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:49.519 07:12:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:22:49.519 07:12:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.519 07:12:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:49.519 [2024-07-24 07:12:04.115298] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028840/0x7fb97dcdf940) succeed. 00:22:49.519 [2024-07-24 07:12:04.124578] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000289c0/0x7fb97dc98940) succeed. 00:22:49.778 07:12:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:22:49.778 07:12:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:22:49.778 07:12:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:50.037 Malloc1 00:22:50.037 07:12:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:22:50.296 Malloc2 00:22:50.296 07:12:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:50.296 07:12:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:22:50.555 07:12:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:50.814 [2024-07-24 07:12:05.242894] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:50.814 07:12:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:22:50.814 07:12:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 47a02729-64b9-4c67-806e-f4b110d56389 -a 192.168.100.8 -s 4420 -i 4 00:22:51.073 07:12:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:22:51.073 07:12:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local i=0 00:22:51.073 07:12:05 
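The RPC sequence and the host-side connect traced above can be replayed by hand; every command here is lifted from the log (the 64/512 malloc bdev parameters, the rdma listener on 192.168.100.8:4420, and the -I host identifier), with $rpc_py and $HOSTID as set earlier:

    $rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc1
    $rpc_py bdev_malloc_create 64 512 -b Malloc2
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # Connect as host1, tagging the controller with the generated host identifier
    nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I "$HOSTID" -a 192.168.100.8 -s 4420 -i 4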
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:22:51.073 07:12:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:22:51.073 07:12:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # sleep 2 00:22:52.977 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:22:52.977 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:22:52.977 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:22:52.977 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:22:52.977 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:22:52.977 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # return 0 00:22:52.977 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:22:52.977 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:22:53.236 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:22:53.236 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:22:53.236 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:22:53.236 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:22:53.236 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:22:53.236 [ 0]:0x1 00:22:53.236 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:22:53.236 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:22:53.236 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=541ffad5210e4a31a3f1175fbf31bf2b 00:22:53.236 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 541ffad5210e4a31a3f1175fbf31bf2b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:53.236 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:22:53.236 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:22:53.236 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:22:53.236 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:22:53.236 [ 0]:0x1 00:22:53.496 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:22:53.496 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:22:53.496 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=541ffad5210e4a31a3f1175fbf31bf2b 00:22:53.496 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 541ffad5210e4a31a3f1175fbf31bf2b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:53.496 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:22:53.496 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:22:53.496 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:22:53.496 [ 1]:0x2 00:22:53.496 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:22:53.496 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:22:53.496 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3b73f3aa8ea6433a88ac010f5b156ff9 00:22:53.496 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3b73f3aa8ea6433a88ac010f5b156ff9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:53.496 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:22:53.496 07:12:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:53.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:53.754 07:12:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:54.013 07:12:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:22:54.272 07:12:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:22:54.272 07:12:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 47a02729-64b9-4c67-806e-f4b110d56389 -a 192.168.100.8 -s 4420 -i 4 00:22:54.530 07:12:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:22:54.530 07:12:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local i=0 00:22:54.530 07:12:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:22:54.530 07:12:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # [[ -n 1 ]] 00:22:54.530 07:12:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # nvme_device_counter=1 00:22:54.530 07:12:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # sleep 2 00:22:56.435 07:12:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:22:56.435 07:12:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:22:56.435 07:12:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:22:56.435 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_devices=1 00:22:56.435 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:22:56.435 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # return 0 00:22:56.435 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:22:56.435 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:22:56.435 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:22:56.435 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:22:56.435 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:22:56.435 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:22:56.435 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:22:56.435 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:22:56.695 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:56.695 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:22:56.695 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:56.695 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:22:56.695 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:22:56.695 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:22:56.695 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:22:56.695 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:22:56.695 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:22:56.695 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:56.695 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:22:56.695 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:56.695 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:56.695 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:56.695 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:22:56.695 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:22:56.695 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:22:56.695 [ 0]:0x2 00:22:56.695 07:12:11 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:22:56.695 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:22:56.695 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3b73f3aa8ea6433a88ac010f5b156ff9 00:22:56.695 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3b73f3aa8ea6433a88ac010f5b156ff9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:56.695 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:22:56.954 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:22:56.955 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:22:56.955 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:22:56.955 [ 0]:0x1 00:22:56.955 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:22:56.955 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:22:56.955 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=541ffad5210e4a31a3f1175fbf31bf2b 00:22:56.955 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 541ffad5210e4a31a3f1175fbf31bf2b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:56.955 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:22:56.955 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:22:56.955 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:22:56.955 [ 1]:0x2 00:22:56.955 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:22:56.955 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:22:56.955 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3b73f3aa8ea6433a88ac010f5b156ff9 00:22:56.955 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3b73f3aa8ea6433a88ac010f5b156ff9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:56.955 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:22:57.214 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:22:57.214 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:22:57.214 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:22:57.214 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:22:57.214 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:22:57.214 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:22:57.214 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:57.214 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:22:57.214 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:22:57.214 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:22:57.214 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:22:57.214 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:22:57.214 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:22:57.214 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:57.214 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:22:57.214 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:57.214 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:57.214 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:57.214 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:22:57.214 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:22:57.214 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:22:57.214 [ 0]:0x2 00:22:57.214 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:22:57.214 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:22:57.214 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3b73f3aa8ea6433a88ac010f5b156ff9 00:22:57.214 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3b73f3aa8ea6433a88ac010f5b156ff9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:57.214 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:22:57.214 07:12:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:57.473 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:57.473 07:12:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:22:57.733 07:12:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:22:57.733 07:12:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 47a02729-64b9-4c67-806e-f4b110d56389 -a 192.168.100.8 -s 4420 -i 4 
00:22:57.992 07:12:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:22:57.992 07:12:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local i=0 00:22:57.992 07:12:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:22:57.992 07:12:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # [[ -n 2 ]] 00:22:57.992 07:12:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # nvme_device_counter=2 00:22:57.992 07:12:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # sleep 2 00:22:59.902 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:22:59.902 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:22:59.902 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:23:00.163 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_devices=2 00:23:00.163 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:23:00.163 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # return 0 00:23:00.163 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:23:00.163 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:23:00.163 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:23:00.163 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:23:00.163 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:23:00.163 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:23:00.163 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:23:00.163 [ 0]:0x1 00:23:00.163 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:23:00.163 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:23:00.163 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=541ffad5210e4a31a3f1175fbf31bf2b 00:23:00.163 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 541ffad5210e4a31a3f1175fbf31bf2b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:00.163 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:23:00.163 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:23:00.163 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:23:00.163 [ 1]:0x2 00:23:00.163 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:23:00.163 07:12:14 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:23:00.163 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3b73f3aa8ea6433a88ac010f5b156ff9 00:23:00.163 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3b73f3aa8ea6433a88ac010f5b156ff9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:00.163 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:23:00.460 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:23:00.460 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:23:00.460 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:23:00.460 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:23:00.460 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:00.460 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:23:00.460 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:00.460 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:23:00.460 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:23:00.460 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:23:00.460 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:23:00.460 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:23:00.460 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:23:00.460 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:00.460 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:23:00.460 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:00.460 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:00.460 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:00.460 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:23:00.460 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:23:00.460 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:23:00.460 [ 0]:0x2 00:23:00.460 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:23:00.460 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # jq -r .nguid 00:23:00.460 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3b73f3aa8ea6433a88ac010f5b156ff9 00:23:00.460 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3b73f3aa8ea6433a88ac010f5b156ff9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:00.460 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:23:00.460 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:23:00.460 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:23:00.461 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:23:00.461 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:00.461 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:23:00.461 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:00.461 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:23:00.461 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:00.461 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:23:00.461 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:23:00.461 07:12:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:23:00.720 [2024-07-24 07:12:15.134888] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:23:00.720 request: 00:23:00.720 { 00:23:00.720 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.720 "nsid": 2, 00:23:00.720 "host": "nqn.2016-06.io.spdk:host1", 00:23:00.720 "method": "nvmf_ns_remove_host", 00:23:00.720 "req_id": 1 00:23:00.720 } 00:23:00.720 Got JSON-RPC error response 00:23:00.720 response: 00:23:00.720 { 00:23:00.720 "code": -32602, 00:23:00.720 "message": "Invalid parameters" 00:23:00.720 } 00:23:00.720 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:23:00.720 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:00.720 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:00.720 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:00.720 07:12:15 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:23:00.720 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:23:00.720 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:23:00.720 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:23:00.720 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:00.721 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:23:00.721 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:00.721 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:23:00.721 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:23:00.721 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:23:00.721 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:23:00.721 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:23:00.721 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:23:00.721 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:00.721 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:23:00.721 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:00.721 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:00.721 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:00.721 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:23:00.721 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:23:00.721 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:23:00.721 [ 0]:0x2 00:23:00.721 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:23:00.721 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:23:00.721 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3b73f3aa8ea6433a88ac010f5b156ff9 00:23:00.721 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3b73f3aa8ea6433a88ac010f5b156ff9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:00.721 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:23:00.721 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:00.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
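Stripped of the xtrace noise, the per-host masking sequence exercised above reduces to a handful of RPCs (commands copied from the trace, with the full path to scripts/rpc.py shortened for readability):

# Add the namespace hidden from all hosts, then expose/hide it per host NQN
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # ns 1 becomes visible to host1
rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # ns 1 is hidden again
# Namespace 2 was added auto-visible, so per-host masking is rejected with
# -32602 Invalid parameters; the NOT wrapper above expects exactly that failure
rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1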
00:23:00.980 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1690590 00:23:00.980 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:23:00.980 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.980 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1690590 /var/tmp/host.sock 00:23:00.980 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1690590 ']' 00:23:00.980 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:23:00.980 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:00.980 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:23:00.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:23:00.980 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:00.980 07:12:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:23:01.239 [2024-07-24 07:12:15.663374] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:23:01.239 [2024-07-24 07:12:15.663471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1690590 ] 00:23:01.239 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.239 [2024-07-24 07:12:15.813522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.498 [2024-07-24 07:12:16.023363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.436 07:12:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:02.436 07:12:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:23:02.436 07:12:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:02.695 07:12:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:02.695 07:12:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 66aefbc0-798e-40ad-9211-ed105d91902a 00:23:02.695 07:12:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:23:02.695 07:12:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 66AEFBC0798E40AD9211ED105D91902A -i 00:23:02.954 07:12:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid c0298000-610b-48b6-a1e9-de6631435415 00:23:02.954 07:12:17 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:23:02.954 07:12:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g C0298000610B48B6A1E9DE6631435415 -i 00:23:03.213 07:12:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:23:03.213 07:12:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:23:03.472 07:12:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:23:03.472 07:12:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:23:03.731 nvme0n1 00:23:03.731 07:12:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:23:03.731 07:12:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:23:03.990 nvme1n2 00:23:03.990 07:12:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:23:03.990 07:12:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:23:03.990 07:12:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:23:03.990 07:12:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:23:03.990 07:12:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:23:04.250 07:12:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:23:04.250 07:12:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:23:04.250 07:12:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:23:04.250 07:12:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:23:04.250 07:12:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 66aefbc0-798e-40ad-9211-ed105d91902a == \6\6\a\e\f\b\c\0\-\7\9\8\e\-\4\0\a\d\-\9\2\1\1\-\e\d\1\0\5\d\9\1\9\0\2\a ]] 00:23:04.250 07:12:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r 
'.[].uuid' 00:23:04.250 07:12:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:23:04.250 07:12:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:23:04.509 07:12:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ c0298000-610b-48b6-a1e9-de6631435415 == \c\0\2\9\8\0\0\0\-\6\1\0\b\-\4\8\b\6\-\a\1\e\9\-\d\e\6\6\3\1\4\3\5\4\1\5 ]] 00:23:04.509 07:12:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1690590 00:23:04.509 07:12:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1690590 ']' 00:23:04.509 07:12:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1690590 00:23:04.509 07:12:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:23:04.509 07:12:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:04.509 07:12:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1690590 00:23:04.509 07:12:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:04.509 07:12:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:04.509 07:12:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1690590' 00:23:04.509 killing process with pid 1690590 00:23:04.509 07:12:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1690590 00:23:04.509 07:12:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1690590 00:23:07.045 07:12:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:07.045 07:12:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:23:07.045 07:12:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:23:07.045 07:12:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:07.045 07:12:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:23:07.045 07:12:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:07.045 07:12:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:07.045 07:12:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:23:07.045 07:12:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:07.045 07:12:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:07.045 rmmod nvme_rdma 00:23:07.045 rmmod nvme_fabrics 00:23:07.046 07:12:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:07.046 07:12:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:23:07.046 07:12:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@125 -- # return 0 00:23:07.046 07:12:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1687922 ']' 00:23:07.046 07:12:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1687922 00:23:07.046 07:12:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1687922 ']' 00:23:07.046 07:12:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1687922 00:23:07.046 07:12:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:23:07.046 07:12:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:07.046 07:12:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1687922 00:23:07.046 07:12:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:07.046 07:12:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:07.046 07:12:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1687922' 00:23:07.046 killing process with pid 1687922 00:23:07.046 07:12:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1687922 00:23:07.046 07:12:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1687922 00:23:08.949 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:08.949 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:08.949 00:23:08.949 real 0m28.924s 00:23:08.949 user 0m32.436s 00:23:08.949 sys 0m9.073s 00:23:08.949 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:08.949 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:23:08.949 ************************************ 00:23:08.949 END TEST nvmf_ns_masking 00:23:08.949 ************************************ 00:23:08.949 07:12:23 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:23:08.949 07:12:23 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:23:08.949 07:12:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:08.949 07:12:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:08.949 07:12:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:08.949 ************************************ 00:23:08.949 START TEST nvmf_nvme_cli 00:23:08.949 ************************************ 00:23:08.949 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:23:09.208 * Looking for test storage... 
00:23:09.208 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:23:09.208 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:09.208 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:23:09.208 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:09.208 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:09.208 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:09.208 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:09.208 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:09.208 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:09.208 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:09.208 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:09.208 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:09.208 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:09.208 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:09.208 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:09.208 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:09.208 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:09.208 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:09.208 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:09.208 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:09.208 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:09.208 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:09.208 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:09.208 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.208 07:12:23 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.208 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.208 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:23:09.209 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.209 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:23:09.209 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:09.209 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:09.209 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:09.209 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:09.209 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:09.209 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:09.209 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:09.209 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:09.209 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:09.209 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:09.209 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:23:09.209 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:23:09.209 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:09.209 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.209 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:09.209 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:09.209 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:09.209 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.209 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.209 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.209 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:09.209 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:09.209 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:23:09.209 07:12:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:17.332 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:17.332 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:17.332 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:17.332 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # rdma_device_init 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # uname 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:17.332 07:12:31 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 
-- # cut -d/ -f1 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:17.332 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:17.332 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:17.332 altname enp217s0f0np0 00:23:17.332 altname ens818f0np0 00:23:17.332 inet 192.168.100.8/24 scope global mlx_0_0 00:23:17.332 valid_lft forever preferred_lft forever 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:17.332 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:17.333 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:17.333 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:17.333 altname enp217s0f1np1 00:23:17.333 altname ens818f1np1 00:23:17.333 inet 192.168.100.9/24 scope global mlx_0_1 00:23:17.333 valid_lft forever preferred_lft forever 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:17.333 
07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:17.333 192.168.100.9' 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:17.333 192.168.100.9' 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@457 -- # head -n 1 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:17.333 192.168.100.9' 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@458 -- # tail -n +2 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@458 -- # head -n 1 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli 
-- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1695968 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1695968 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1695968 ']' 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:17.333 07:12:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:17.333 [2024-07-24 07:12:31.948835] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:23:17.333 [2024-07-24 07:12:31.948934] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.592 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.592 [2024-07-24 07:12:32.096733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:17.850 [2024-07-24 07:12:32.300062] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.850 [2024-07-24 07:12:32.300109] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.850 [2024-07-24 07:12:32.300123] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.850 [2024-07-24 07:12:32.300134] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.850 [2024-07-24 07:12:32.300145] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
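[editor's note] In the trace above, the two RDMA-capable ports are gathered into RDMA_IP_LIST and then split into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP with a head/tail pipeline (nvmf/common.sh@456-458). A minimal standalone sketch of that split, assuming a newline-separated list like the one echoed in the log (an illustration reconstructed from the trace, not the framework's literal code):

    # Sketch: split a newline-separated RDMA IP list into first/second target IPs,
    # mirroring the head/tail pipeline visible at nvmf/common.sh@457-458 above.
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

    echo "first:  $NVMF_FIRST_TARGET_IP"    # 192.168.100.8
    echo "second: $NVMF_SECOND_TARGET_IP"   # 192.168.100.9

With the addresses known, the test then launches nvmf_tgt (-i 0 -e 0xFFFF -m 0xF, as shown at nvmf/common.sh@480-481) and waits for it to listen on /var/tmp/spdk.sock before issuing RPCs.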
00:23:17.850 [2024-07-24 07:12:32.300274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.850 [2024-07-24 07:12:32.300406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.850 [2024-07-24 07:12:32.300492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.850 [2024-07-24 07:12:32.300503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:18.109 07:12:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:18.109 07:12:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:23:18.109 07:12:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:18.109 07:12:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:18.109 07:12:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:18.369 07:12:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.369 07:12:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:18.369 07:12:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.369 07:12:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:18.369 [2024-07-24 07:12:32.796221] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f5261de6940) succeed. 00:23:18.369 [2024-07-24 07:12:32.806430] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f5261d9f940) succeed. 
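[editor's note] Both mlx5 IB devices have now registered with the target, and the test starts provisioning it over JSON-RPC. In this framework rpc_cmd is a wrapper around SPDK's RPC client; the provisioning sequence traced in the following lines (transport, two malloc bdevs, one subsystem with serial SPDKISFASTANDAWESOME, two namespaces, and RDMA listeners on 192.168.100.8:4420) could be reproduced by hand with scripts/rpc.py roughly as below. This is a sketch of equivalent calls with the same arguments that appear in the log, not the script's own code:

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # path as used elsewhere in this job

    # Transport and backing devices, mirroring the rpc_cmd calls in the trace
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC bdev_malloc_create 64 512 -b Malloc1

    # Subsystem with two namespaces, then RDMA listeners (data + discovery)
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

After this, nvme discover should report two log entries (the discovery subsystem and cnode1), which is exactly what the discovery output further down shows.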
00:23:18.628 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.628 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:18.628 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.628 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:18.628 Malloc0 00:23:18.628 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.628 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:18.628 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.628 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:18.888 Malloc1 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:18.888 [2024-07-24 07:12:33.326133] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:23:18.888 07:12:33 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:23:18.888 00:23:18.888 Discovery Log Number of Records 2, Generation counter 2 00:23:18.888 =====Discovery Log Entry 0====== 00:23:18.888 trtype: rdma 00:23:18.888 adrfam: ipv4 00:23:18.888 subtype: current discovery subsystem 00:23:18.888 treq: not required 00:23:18.888 portid: 0 00:23:18.888 trsvcid: 4420 00:23:18.888 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:18.888 traddr: 192.168.100.8 00:23:18.888 eflags: explicit discovery connections, duplicate discovery information 00:23:18.888 rdma_prtype: not specified 00:23:18.888 rdma_qptype: connected 00:23:18.888 rdma_cms: rdma-cm 00:23:18.888 rdma_pkey: 0x0000 00:23:18.888 =====Discovery Log Entry 1====== 00:23:18.888 trtype: rdma 00:23:18.888 adrfam: ipv4 00:23:18.888 subtype: nvme subsystem 00:23:18.888 treq: not required 00:23:18.888 portid: 0 00:23:18.888 trsvcid: 4420 00:23:18.888 subnqn: nqn.2016-06.io.spdk:cnode1 00:23:18.888 traddr: 192.168.100.8 00:23:18.888 eflags: none 00:23:18.888 rdma_prtype: not specified 00:23:18.888 rdma_qptype: connected 00:23:18.888 rdma_cms: rdma-cm 00:23:18.888 rdma_pkey: 0x0000 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:23:18.888 07:12:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:23:19.823 07:12:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:23:19.823 07:12:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # local i=0 00:23:19.823 07:12:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:23:19.823 07:12:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1198 -- # [[ -n 2 ]] 00:23:19.823 07:12:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # nvme_device_counter=2 00:23:19.823 07:12:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # sleep 2 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_devices=2 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # return 0 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:23:22.389 /dev/nvme0n1 ]] 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:23:22.389 07:12:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:22.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:22.957 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:22.957 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1217 -- # local i=0 00:23:22.957 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:23:22.957 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:22.957 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:23:22.957 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:22.957 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # return 0 00:23:22.957 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:23:22.957 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:22.957 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.957 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:22.957 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.957 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:23:22.957 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:23:22.957 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:22.957 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:23:22.957 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:22.958 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:22.958 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:23:22.958 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:22.958 
07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:22.958 rmmod nvme_rdma 00:23:22.958 rmmod nvme_fabrics 00:23:22.958 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:22.958 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:23:22.958 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:23:22.958 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1695968 ']' 00:23:22.958 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1695968 00:23:22.958 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 1695968 ']' 00:23:22.958 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1695968 00:23:22.958 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:23:22.958 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:22.958 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1695968 00:23:23.216 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:23.216 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:23.216 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1695968' 00:23:23.216 killing process with pid 1695968 00:23:23.216 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1695968 00:23:23.216 07:12:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1695968 00:23:25.753 07:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:25.753 07:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:25.753 00:23:25.753 real 0m16.334s 00:23:25.753 user 0m30.105s 00:23:25.753 sys 0m7.051s 00:23:25.753 07:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:25.753 07:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:25.753 ************************************ 00:23:25.753 END TEST nvmf_nvme_cli 00:23:25.753 ************************************ 00:23:25.753 07:12:39 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:23:25.753 07:12:39 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:23:25.753 07:12:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:25.753 07:12:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:25.753 07:12:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:25.753 ************************************ 00:23:25.753 START TEST nvmf_auth_target 00:23:25.753 ************************************ 00:23:25.753 07:12:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:23:25.753 * Looking for test storage... 00:23:25.753 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 
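[editor's note] While sourcing nvmf/common.sh just above, the auth test derives the host identity used by every nvme(1) invocation in these runs: nvme gen-hostnqn supplies the uuid-based host NQN, the host ID is the uuid portion of that NQN, and both are packed into the NVME_HOST argument array (nvmf/common.sh@17-19 in the trace). A small sketch of that derivation; the parameter expansion for the uuid is an assumption made for illustration, not necessarily the script's exact expression:

    # Sketch: build the --hostnqn/--hostid arguments the tests pass to nvme(1),
    # following the values shown at nvmf/common.sh@17-19 in the trace.
    NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumed: take the uuid after the last ':'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

    # Used later in the log as, for example:
    #   nvme discover "${NVME_HOST[@]}" -t rdma -a 192.168.100.8 -s 4420
    echo "${NVME_HOST[@]}"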
00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:23:25.753 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:23:25.754 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:25.754 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.754 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:25.754 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:25.754 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:25.754 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.754 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:25.754 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.754 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:25.754 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:25.754 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:23:25.754 07:12:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@295 -- # net_devs=() 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:33.874 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:33.874 07:12:48 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:33.874 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:33.874 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: 
mlx_0_1' 00:23:33.874 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # rdma_device_init 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # uname 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:33.874 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:23:33.875 
07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:33.875 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:33.875 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:33.875 altname enp217s0f0np0 00:23:33.875 altname ens818f0np0 00:23:33.875 inet 192.168.100.8/24 scope global mlx_0_0 00:23:33.875 valid_lft forever preferred_lft forever 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:33.875 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:33.875 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:33.875 altname enp217s0f1np1 00:23:33.875 altname ens818f1np1 00:23:33.875 inet 192.168.100.9/24 scope global mlx_0_1 00:23:33.875 valid_lft forever preferred_lft forever 
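[editor's note] The allocate_nic_ips pass above confirms that both ports again answer on 192.168.100.8 and 192.168.100.9 for the auth test. The get_ip_address helper whose expansion is visible in the trace (nvmf/common.sh@112-113) reduces to a single pipeline; reconstructed here as a standalone sketch:

    # Sketch of the IPv4 lookup expanded at nvmf/common.sh@112-113 in the trace:
    # print the IPv4 address(es) of an interface without the prefix length.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig
    get_ip_address mlx_0_1   # -> 192.168.100.9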
00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:33.875 192.168.100.9' 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:33.875 192.168.100.9' 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # head -n 1 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:33.875 192.168.100.9' 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # tail -n +2 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # head -n 1 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1701359 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1701359 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1701359 ']' 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.875 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:33.875 07:12:48 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.876 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:33.876 07:12:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1701491 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d85d99257667946874d34ac54b4f7d3e864f64fd40e65ac2 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Hjn 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d85d99257667946874d34ac54b4f7d3e864f64fd40e65ac2 0 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d85d99257667946874d34ac54b4f7d3e864f64fd40e65ac2 0 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d85d99257667946874d34ac54b4f7d3e864f64fd40e65ac2 00:23:34.814 
07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Hjn 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Hjn 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.Hjn 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=69da23b86daecc3be230ba46f70003fc40d78d11609c31cc959919eada2f7c50 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.kZq 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 69da23b86daecc3be230ba46f70003fc40d78d11609c31cc959919eada2f7c50 3 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 69da23b86daecc3be230ba46f70003fc40d78d11609c31cc959919eada2f7c50 3 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=69da23b86daecc3be230ba46f70003fc40d78d11609c31cc959919eada2f7c50 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.kZq 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.kZq 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.kZq 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9010a472946c857c45fc9f4d7d8ffa15 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.DyX 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9010a472946c857c45fc9f4d7d8ffa15 1 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9010a472946c857c45fc9f4d7d8ffa15 1 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9010a472946c857c45fc9f4d7d8ffa15 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.DyX 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.DyX 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.DyX 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a213b6475c4b225ef992965445e2c41499271171b1804582 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.beG 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a213b6475c4b225ef992965445e2c41499271171b1804582 2 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@719 -- # format_key DHHC-1 a213b6475c4b225ef992965445e2c41499271171b1804582 2 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a213b6475c4b225ef992965445e2c41499271171b1804582 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:23:34.814 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.beG 00:23:34.815 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.beG 00:23:34.815 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.beG 00:23:34.815 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:23:34.815 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:23:34.815 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:34.815 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:23:34.815 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:23:34.815 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:23:34.815 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:34.815 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7dadebbcc2ddf73e048b6a35393aafa46a6005c380f0289d 00:23:34.815 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:35.074 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.fpy 00:23:35.074 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7dadebbcc2ddf73e048b6a35393aafa46a6005c380f0289d 2 00:23:35.074 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7dadebbcc2ddf73e048b6a35393aafa46a6005c380f0289d 2 00:23:35.074 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:23:35.074 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:35.074 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7dadebbcc2ddf73e048b6a35393aafa46a6005c380f0289d 00:23:35.074 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:23:35.074 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:23:35.074 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.fpy 00:23:35.074 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.fpy 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # 
keys[2]=/tmp/spdk.key-sha384.fpy 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bba3188979bd044f897fb36c9f2a2f5c 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.BSO 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bba3188979bd044f897fb36c9f2a2f5c 1 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bba3188979bd044f897fb36c9f2a2f5c 1 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bba3188979bd044f897fb36c9f2a2f5c 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.BSO 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.BSO 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.BSO 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fd30a4371a3693b84815ebf57d38fe02e312a1723eb771f89e2b530e6204d7e3 00:23:35.075 07:12:49 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.MBM 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fd30a4371a3693b84815ebf57d38fe02e312a1723eb771f89e2b530e6204d7e3 3 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fd30a4371a3693b84815ebf57d38fe02e312a1723eb771f89e2b530e6204d7e3 3 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fd30a4371a3693b84815ebf57d38fe02e312a1723eb771f89e2b530e6204d7e3 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.MBM 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.MBM 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.MBM 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1701359 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1701359 ']' 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
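[editor's note] The gen_dhchap_key calls above draw the requested number of hex characters from /dev/urandom (xxd -p -c0 -l <len/2>) and wrap them into the DHHC-1 secrets that reappear later in the nvme connect commands (DHHC-1:00:..., DHHC-1:03:..., etc.), with the digest id 0/1/2/3 standing for null/sha256/sha384/sha512. A minimal sketch of that construction follows; the inline python mirrors the script's own "python -" step, and the assumption that the 4-byte trailer is a little-endian CRC-32 of the secret (as nvme-cli's gen-dhchap-key produces) is mine, not something stated in the log.

    # Sketch of the DHHC-1 secret construction used for keys[0] ("null 48") above.
    # ASSUMPTION: trailing 4 bytes are a little-endian CRC-32 of the secret text;
    # the authoritative encoder is the inline python in nvmf/common.sh.
    key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex characters of random secret
    digest_id=0                            # null=0 sha256=1 sha384=2 sha512=3
    secret=$(python3 -c 'import base64, zlib, sys
    k = sys.argv[1].encode()
    print(base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode())' "$key")
    echo "DHHC-1:0${digest_id}:${secret}:"
    keyfile=$(mktemp -t spdk.key-null.XXX) && chmod 0600 "$keyfile"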
00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:35.075 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.335 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:35.335 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:23:35.335 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1701491 /var/tmp/host.sock 00:23:35.335 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1701491 ']' 00:23:35.335 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:23:35.335 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:35.335 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:23:35.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:23:35.335 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:35.335 07:12:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.902 07:12:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:35.902 07:12:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:23:35.902 07:12:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:23:35.902 07:12:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.902 07:12:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.161 07:12:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.161 07:12:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:23:36.161 07:12:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Hjn 00:23:36.161 07:12:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.161 07:12:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.161 07:12:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.161 07:12:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Hjn 00:23:36.161 07:12:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Hjn 00:23:36.421 07:12:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.kZq ]] 00:23:36.421 07:12:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kZq 00:23:36.421 07:12:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.421 07:12:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.421 07:12:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.421 07:12:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kZq 00:23:36.421 07:12:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kZq 00:23:36.421 07:12:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:23:36.421 07:12:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.DyX 00:23:36.421 07:12:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.421 07:12:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.421 07:12:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.421 07:12:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.DyX 00:23:36.421 07:12:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.DyX 00:23:36.680 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.beG ]] 00:23:36.680 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.beG 00:23:36.680 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.680 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.680 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.680 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.beG 00:23:36.680 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.beG 00:23:36.940 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:23:36.940 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.fpy 00:23:36.940 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.940 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.940 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.940 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.fpy 00:23:36.940 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.fpy 00:23:36.940 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.BSO ]] 00:23:36.940 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BSO 00:23:36.940 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.940 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.940 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.940 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BSO 00:23:36.940 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BSO 00:23:37.199 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:23:37.199 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.MBM 00:23:37.199 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.199 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.199 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.199 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.MBM 00:23:37.199 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.MBM 00:23:37.458 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:23:37.458 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:23:37.458 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:37.458 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:37.458 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:23:37.458 07:12:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:23:37.717 07:12:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:23:37.717 07:12:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:37.717 07:12:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:37.717 07:12:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:37.717 07:12:52 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:37.717 07:12:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:37.717 07:12:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:37.717 07:12:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.717 07:12:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.717 07:12:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.717 07:12:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:37.717 07:12:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:37.977 00:23:37.977 07:12:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:37.977 07:12:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:37.977 07:12:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:37.977 07:12:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.977 07:12:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:37.977 07:12:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.977 07:12:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.977 07:12:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.977 07:12:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:37.977 { 00:23:37.977 "cntlid": 1, 00:23:37.977 "qid": 0, 00:23:37.977 "state": "enabled", 00:23:37.977 "thread": "nvmf_tgt_poll_group_000", 00:23:37.977 "listen_address": { 00:23:37.977 "trtype": "RDMA", 00:23:37.977 "adrfam": "IPv4", 00:23:37.977 "traddr": "192.168.100.8", 00:23:37.977 "trsvcid": "4420" 00:23:37.977 }, 00:23:37.977 "peer_address": { 00:23:37.977 "trtype": "RDMA", 00:23:37.977 "adrfam": "IPv4", 00:23:37.977 "traddr": "192.168.100.8", 00:23:37.977 "trsvcid": "50551" 00:23:37.977 }, 00:23:37.977 "auth": { 00:23:37.977 "state": "completed", 00:23:37.977 "digest": "sha256", 00:23:37.977 "dhgroup": "null" 00:23:37.977 } 00:23:37.977 } 00:23:37.977 ]' 00:23:37.977 07:12:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:37.977 07:12:52 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:37.977 07:12:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:38.236 07:12:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:38.236 07:12:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:38.236 07:12:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:38.236 07:12:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:38.236 07:12:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:38.495 07:12:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDg1ZDk5MjU3NjY3OTQ2ODc0ZDM0YWM1NGI0ZjdkM2U4NjRmNjRmZDQwZTY1YWMyAtInbA==: --dhchap-ctrl-secret DHHC-1:03:NjlkYTIzYjg2ZGFlY2MzYmUyMzBiYTQ2ZjcwMDAzZmM0MGQ3OGQxMTYwOWMzMWNjOTU5OTE5ZWFkYTJmN2M1MF7Oh18=: 00:23:39.062 07:12:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:39.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:39.062 07:12:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:39.062 07:12:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.062 07:12:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.062 07:12:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.062 07:12:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:39.062 07:12:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:23:39.062 07:12:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:23:39.321 07:12:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:23:39.321 07:12:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:39.321 07:12:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:39.321 07:12:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:39.321 07:12:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:39.321 07:12:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:39.321 07:12:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.321 07:12:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.321 07:12:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.321 07:12:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.321 07:12:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.321 07:12:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.581 00:23:39.581 07:12:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:39.581 07:12:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:39.581 07:12:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:39.581 07:12:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.581 07:12:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:39.581 07:12:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.581 07:12:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.840 07:12:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.840 07:12:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:39.840 { 00:23:39.840 "cntlid": 3, 00:23:39.840 "qid": 0, 00:23:39.840 "state": "enabled", 00:23:39.840 "thread": "nvmf_tgt_poll_group_000", 00:23:39.840 "listen_address": { 00:23:39.840 "trtype": "RDMA", 00:23:39.840 "adrfam": "IPv4", 00:23:39.840 "traddr": "192.168.100.8", 00:23:39.840 "trsvcid": "4420" 00:23:39.840 }, 00:23:39.840 "peer_address": { 00:23:39.840 "trtype": "RDMA", 00:23:39.840 "adrfam": "IPv4", 00:23:39.840 "traddr": "192.168.100.8", 00:23:39.840 "trsvcid": "48759" 00:23:39.840 }, 00:23:39.840 "auth": { 00:23:39.840 "state": "completed", 00:23:39.840 "digest": "sha256", 00:23:39.840 "dhgroup": "null" 00:23:39.840 } 00:23:39.840 } 00:23:39.840 ]' 00:23:39.840 07:12:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:39.840 07:12:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:39.840 07:12:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:39.840 07:12:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:39.840 07:12:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:39.840 07:12:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:39.840 07:12:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:39.840 07:12:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:40.101 07:12:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OTAxMGE0NzI5NDZjODU3YzQ1ZmM5ZjRkN2Q4ZmZhMTVijb+l: --dhchap-ctrl-secret DHHC-1:02:YTIxM2I2NDc1YzRiMjI1ZWY5OTI5NjU0NDVlMmM0MTQ5OTI3MTE3MWIxODA0NTgyms9fpQ==: 00:23:40.756 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:40.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:40.756 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:40.756 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.756 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.756 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.756 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:40.756 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:23:40.756 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:23:41.015 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:23:41.015 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:41.015 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:41.015 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:41.015 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:41.015 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:41.015 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:41.015 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.015 07:12:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.015 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.015 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:41.015 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:41.275 00:23:41.275 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:41.275 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:41.275 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:41.275 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.275 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:41.275 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.275 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.275 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.275 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:41.275 { 00:23:41.275 "cntlid": 5, 00:23:41.275 "qid": 0, 00:23:41.275 "state": "enabled", 00:23:41.275 "thread": "nvmf_tgt_poll_group_000", 00:23:41.275 "listen_address": { 00:23:41.275 "trtype": "RDMA", 00:23:41.275 "adrfam": "IPv4", 00:23:41.275 "traddr": "192.168.100.8", 00:23:41.275 "trsvcid": "4420" 00:23:41.275 }, 00:23:41.275 "peer_address": { 00:23:41.275 "trtype": "RDMA", 00:23:41.275 "adrfam": "IPv4", 00:23:41.275 "traddr": "192.168.100.8", 00:23:41.275 "trsvcid": "49479" 00:23:41.275 }, 00:23:41.275 "auth": { 00:23:41.275 "state": "completed", 00:23:41.275 "digest": "sha256", 00:23:41.275 "dhgroup": "null" 00:23:41.275 } 00:23:41.275 } 00:23:41.275 ]' 00:23:41.275 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:41.534 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:41.534 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:41.534 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:41.534 07:12:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:41.534 07:12:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:41.534 07:12:56 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:41.534 07:12:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:41.793 07:12:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:N2RhZGViYmNjMmRkZjczZTA0OGI2YTM1MzkzYWFmYTQ2YTYwMDVjMzgwZjAyODlkEkIl8w==: --dhchap-ctrl-secret DHHC-1:01:YmJhMzE4ODk3OWJkMDQ0Zjg5N2ZiMzZjOWYyYTJmNWN7FiE7: 00:23:42.362 07:12:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:42.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:42.362 07:12:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:42.362 07:12:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.362 07:12:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.362 07:12:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.362 07:12:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:42.362 07:12:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:23:42.362 07:12:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:23:42.621 07:12:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:23:42.621 07:12:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:42.621 07:12:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:42.621 07:12:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:42.621 07:12:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:42.621 07:12:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:42.621 07:12:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:42.621 07:12:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.621 07:12:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.621 07:12:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.621 07:12:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t 
rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:42.621 07:12:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:42.880 00:23:42.880 07:12:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:42.880 07:12:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:42.880 07:12:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:43.140 07:12:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.140 07:12:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:43.140 07:12:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.140 07:12:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.140 07:12:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.140 07:12:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:43.140 { 00:23:43.140 "cntlid": 7, 00:23:43.140 "qid": 0, 00:23:43.140 "state": "enabled", 00:23:43.140 "thread": "nvmf_tgt_poll_group_000", 00:23:43.140 "listen_address": { 00:23:43.140 "trtype": "RDMA", 00:23:43.140 "adrfam": "IPv4", 00:23:43.140 "traddr": "192.168.100.8", 00:23:43.140 "trsvcid": "4420" 00:23:43.140 }, 00:23:43.140 "peer_address": { 00:23:43.140 "trtype": "RDMA", 00:23:43.140 "adrfam": "IPv4", 00:23:43.140 "traddr": "192.168.100.8", 00:23:43.140 "trsvcid": "38901" 00:23:43.140 }, 00:23:43.140 "auth": { 00:23:43.140 "state": "completed", 00:23:43.140 "digest": "sha256", 00:23:43.140 "dhgroup": "null" 00:23:43.140 } 00:23:43.140 } 00:23:43.140 ]' 00:23:43.140 07:12:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:43.140 07:12:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:43.140 07:12:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:43.140 07:12:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:43.140 07:12:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:43.140 07:12:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:43.140 07:12:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:43.140 07:12:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:43.399 07:12:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme 
connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ZmQzMGE0MzcxYTM2OTNiODQ4MTVlYmY1N2QzOGZlMDJlMzEyYTE3MjNlYjc3MWY4OWUyYjUzMGU2MjA0ZDdlM4COFM4=: 00:23:43.966 07:12:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:44.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:44.223 07:12:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:44.223 07:12:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.223 07:12:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.223 07:12:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.223 07:12:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:44.223 07:12:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:44.223 07:12:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:44.223 07:12:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:44.223 07:12:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:23:44.223 07:12:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:44.223 07:12:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:44.223 07:12:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:44.223 07:12:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:44.223 07:12:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:44.223 07:12:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:44.223 07:12:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.223 07:12:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.223 07:12:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.224 07:12:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:44.224 07:12:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
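Before each round the host initiator is restricted to a single digest and DH group via bdev_nvme_set_options, so a successful attach proves that the specific combination negotiates end to end; a sketch of the host-side call for the ffdhe2048 round just shown (socket path and values as in the trace):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

In this trace only the host-side options change between rounds, which is what varies the dhgroup value later reported in the qpair's auth block.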
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:44.482 00:23:44.482 07:12:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:44.482 07:12:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:44.482 07:12:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:44.741 07:12:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.741 07:12:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:44.741 07:12:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.741 07:12:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.741 07:12:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.741 07:12:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:44.741 { 00:23:44.741 "cntlid": 9, 00:23:44.741 "qid": 0, 00:23:44.741 "state": "enabled", 00:23:44.741 "thread": "nvmf_tgt_poll_group_000", 00:23:44.741 "listen_address": { 00:23:44.741 "trtype": "RDMA", 00:23:44.741 "adrfam": "IPv4", 00:23:44.741 "traddr": "192.168.100.8", 00:23:44.741 "trsvcid": "4420" 00:23:44.741 }, 00:23:44.741 "peer_address": { 00:23:44.741 "trtype": "RDMA", 00:23:44.741 "adrfam": "IPv4", 00:23:44.741 "traddr": "192.168.100.8", 00:23:44.741 "trsvcid": "55971" 00:23:44.741 }, 00:23:44.741 "auth": { 00:23:44.741 "state": "completed", 00:23:44.741 "digest": "sha256", 00:23:44.741 "dhgroup": "ffdhe2048" 00:23:44.741 } 00:23:44.741 } 00:23:44.741 ]' 00:23:44.741 07:12:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:44.741 07:12:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:44.741 07:12:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:44.741 07:12:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:44.741 07:12:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:44.741 07:12:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:44.741 07:12:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:44.741 07:12:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:45.000 07:12:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret 
DHHC-1:00:ZDg1ZDk5MjU3NjY3OTQ2ODc0ZDM0YWM1NGI0ZjdkM2U4NjRmNjRmZDQwZTY1YWMyAtInbA==: --dhchap-ctrl-secret DHHC-1:03:NjlkYTIzYjg2ZGFlY2MzYmUyMzBiYTQ2ZjcwMDAzZmM0MGQ3OGQxMTYwOWMzMWNjOTU5OTE5ZWFkYTJmN2M1MF7Oh18=: 00:23:45.568 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:45.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:45.827 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:45.827 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.827 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.827 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.827 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:45.827 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:45.828 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:45.828 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:23:45.828 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:45.828 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:45.828 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:45.828 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:45.828 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:45.828 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:45.828 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.828 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.828 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.828 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:45.828 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:46.086 00:23:46.086 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:46.086 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:46.086 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:46.345 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.345 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:46.345 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.345 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.345 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.345 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:46.345 { 00:23:46.345 "cntlid": 11, 00:23:46.345 "qid": 0, 00:23:46.345 "state": "enabled", 00:23:46.345 "thread": "nvmf_tgt_poll_group_000", 00:23:46.345 "listen_address": { 00:23:46.345 "trtype": "RDMA", 00:23:46.345 "adrfam": "IPv4", 00:23:46.345 "traddr": "192.168.100.8", 00:23:46.345 "trsvcid": "4420" 00:23:46.345 }, 00:23:46.345 "peer_address": { 00:23:46.345 "trtype": "RDMA", 00:23:46.345 "adrfam": "IPv4", 00:23:46.345 "traddr": "192.168.100.8", 00:23:46.345 "trsvcid": "50442" 00:23:46.345 }, 00:23:46.345 "auth": { 00:23:46.345 "state": "completed", 00:23:46.345 "digest": "sha256", 00:23:46.345 "dhgroup": "ffdhe2048" 00:23:46.345 } 00:23:46.345 } 00:23:46.345 ]' 00:23:46.345 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:46.345 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:46.345 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:46.345 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:46.345 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:46.603 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:46.603 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:46.603 07:13:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:46.603 07:13:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OTAxMGE0NzI5NDZjODU3YzQ1ZmM5ZjRkN2Q4ZmZhMTVijb+l: --dhchap-ctrl-secret DHHC-1:02:YTIxM2I2NDc1YzRiMjI1ZWY5OTI5NjU0NDVlMmM0MTQ5OTI3MTE3MWIxODA0NTgyms9fpQ==: 00:23:47.184 07:13:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
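The --dhchap-secret / --dhchap-ctrl-secret strings passed to nvme connect above are NVMe in-band authentication secrets in the DHHC-1:<nn>:<base64>: representation, where the two-digit tag records how the raw secret was transformed (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). A sketch of generating and using such a key with nvme-cli (assumes a build that ships gen-dhchap-key; placeholder key material, NQNs and addresses as in the log):

    # generate a SHA-256-transformed host key for this host NQN (output looks like DHHC-1:01:<base64>:)
    nvme gen-dhchap-key --hmac=1 --nqn nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
    # connect using that key as the host secret and a separately generated controller secret
    nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-secret 'DHHC-1:01:<host key>:' --dhchap-ctrl-secret 'DHHC-1:01:<ctrl key>:'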
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:47.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:47.443 07:13:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:47.443 07:13:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.443 07:13:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.443 07:13:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.443 07:13:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:47.443 07:13:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:47.443 07:13:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:47.702 07:13:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:23:47.702 07:13:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:47.702 07:13:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:47.702 07:13:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:47.702 07:13:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:47.702 07:13:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:47.702 07:13:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:47.702 07:13:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.702 07:13:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.702 07:13:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.702 07:13:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:47.702 07:13:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:47.960 00:23:47.960 07:13:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:47.960 07:13:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:23:47.960 07:13:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:47.960 07:13:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.960 07:13:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:47.960 07:13:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.960 07:13:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.960 07:13:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.960 07:13:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:47.960 { 00:23:47.960 "cntlid": 13, 00:23:47.960 "qid": 0, 00:23:47.960 "state": "enabled", 00:23:47.960 "thread": "nvmf_tgt_poll_group_000", 00:23:47.960 "listen_address": { 00:23:47.960 "trtype": "RDMA", 00:23:47.960 "adrfam": "IPv4", 00:23:47.960 "traddr": "192.168.100.8", 00:23:47.960 "trsvcid": "4420" 00:23:47.960 }, 00:23:47.960 "peer_address": { 00:23:47.960 "trtype": "RDMA", 00:23:47.960 "adrfam": "IPv4", 00:23:47.960 "traddr": "192.168.100.8", 00:23:47.960 "trsvcid": "59517" 00:23:47.960 }, 00:23:47.960 "auth": { 00:23:47.960 "state": "completed", 00:23:47.960 "digest": "sha256", 00:23:47.960 "dhgroup": "ffdhe2048" 00:23:47.960 } 00:23:47.960 } 00:23:47.960 ]' 00:23:47.960 07:13:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:47.960 07:13:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:48.219 07:13:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:48.219 07:13:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:48.219 07:13:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:48.219 07:13:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:48.219 07:13:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:48.219 07:13:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:48.477 07:13:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:N2RhZGViYmNjMmRkZjczZTA0OGI2YTM1MzkzYWFmYTQ2YTYwMDVjMzgwZjAyODlkEkIl8w==: --dhchap-ctrl-secret DHHC-1:01:YmJhMzE4ODk3OWJkMDQ0Zjg5N2ZiMzZjOWYyYTJmNWN7FiE7: 00:23:49.043 07:13:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:49.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:49.043 07:13:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:49.043 07:13:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.043 07:13:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.043 07:13:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.043 07:13:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:49.043 07:13:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:49.043 07:13:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:49.302 07:13:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:23:49.302 07:13:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:49.302 07:13:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:49.302 07:13:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:49.302 07:13:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:49.302 07:13:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:49.302 07:13:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:49.302 07:13:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.302 07:13:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.302 07:13:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.302 07:13:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:49.302 07:13:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:49.559 00:23:49.559 07:13:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:49.559 07:13:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:49.559 07:13:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:49.817 07:13:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.817 07:13:04 
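The target-side half of each round is the pair of rpc_cmd calls seen above: the host NQN is registered on the subsystem with the keyring names DH-HMAC-CHAP should use, and removed again once the round is torn down. A sketch with the NQNs and key names from the trace (the target app is assumed to listen on its default RPC socket):

    # require DH-HMAC-CHAP with key3 for this host on cnode0
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --dhchap-key key3
    # ... attach, verify, connect with nvme-cli, disconnect ...
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
        nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e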
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:49.817 07:13:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.817 07:13:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.817 07:13:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.817 07:13:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:49.817 { 00:23:49.817 "cntlid": 15, 00:23:49.817 "qid": 0, 00:23:49.817 "state": "enabled", 00:23:49.817 "thread": "nvmf_tgt_poll_group_000", 00:23:49.817 "listen_address": { 00:23:49.817 "trtype": "RDMA", 00:23:49.817 "adrfam": "IPv4", 00:23:49.817 "traddr": "192.168.100.8", 00:23:49.817 "trsvcid": "4420" 00:23:49.817 }, 00:23:49.817 "peer_address": { 00:23:49.817 "trtype": "RDMA", 00:23:49.817 "adrfam": "IPv4", 00:23:49.817 "traddr": "192.168.100.8", 00:23:49.817 "trsvcid": "59565" 00:23:49.817 }, 00:23:49.817 "auth": { 00:23:49.817 "state": "completed", 00:23:49.817 "digest": "sha256", 00:23:49.817 "dhgroup": "ffdhe2048" 00:23:49.817 } 00:23:49.817 } 00:23:49.817 ]' 00:23:49.817 07:13:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:49.817 07:13:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:49.817 07:13:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:49.817 07:13:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:49.817 07:13:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:49.817 07:13:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:49.817 07:13:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:49.817 07:13:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:50.076 07:13:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ZmQzMGE0MzcxYTM2OTNiODQ4MTVlYmY1N2QzOGZlMDJlMzEyYTE3MjNlYjc3MWY4OWUyYjUzMGU2MjA0ZDdlM4COFM4=: 00:23:50.643 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:50.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:50.643 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:50.643 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.643 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.643 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.643 
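Every round traced in this section follows the same shape; a condensed paraphrase of one connect_authenticate pass as it appears in the trace (hostrpc and rpc_cmd are the test script's own wrappers, keys/ckeys mirror the arrays visible in the trace, and the exact option handling is simplified):

    # one round: digest=sha256, dhgroup and key index vary per iteration
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect "completed"
    hostrpc bdev_nvme_detach_controller nvme0
    nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -q "$hostnqn" \
        --dhchap-secret "${keys[$keyid]}" ${ckeys[$keyid]:+--dhchap-ctrl-secret "${ckeys[$keyid]}"}
    nvme disconnect -n "$subnqn"
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"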
07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:50.643 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:50.643 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:50.643 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:50.901 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:23:50.901 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:50.901 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:50.901 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:50.901 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:50.901 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:50.901 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:50.901 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.901 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.901 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.901 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:50.901 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:51.160 00:23:51.160 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:51.160 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:51.160 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:51.418 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.418 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:51.418 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:51.418 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.418 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.418 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:51.418 { 00:23:51.418 "cntlid": 17, 00:23:51.418 "qid": 0, 00:23:51.418 "state": "enabled", 00:23:51.418 "thread": "nvmf_tgt_poll_group_000", 00:23:51.418 "listen_address": { 00:23:51.418 "trtype": "RDMA", 00:23:51.418 "adrfam": "IPv4", 00:23:51.418 "traddr": "192.168.100.8", 00:23:51.418 "trsvcid": "4420" 00:23:51.418 }, 00:23:51.418 "peer_address": { 00:23:51.418 "trtype": "RDMA", 00:23:51.418 "adrfam": "IPv4", 00:23:51.418 "traddr": "192.168.100.8", 00:23:51.418 "trsvcid": "41684" 00:23:51.418 }, 00:23:51.418 "auth": { 00:23:51.418 "state": "completed", 00:23:51.418 "digest": "sha256", 00:23:51.418 "dhgroup": "ffdhe3072" 00:23:51.418 } 00:23:51.418 } 00:23:51.418 ]' 00:23:51.418 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:51.418 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:51.418 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:51.418 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:51.418 07:13:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:51.418 07:13:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:51.418 07:13:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:51.418 07:13:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:51.677 07:13:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDg1ZDk5MjU3NjY3OTQ2ODc0ZDM0YWM1NGI0ZjdkM2U4NjRmNjRmZDQwZTY1YWMyAtInbA==: --dhchap-ctrl-secret DHHC-1:03:NjlkYTIzYjg2ZGFlY2MzYmUyMzBiYTQ2ZjcwMDAzZmM0MGQ3OGQxMTYwOWMzMWNjOTU5OTE5ZWFkYTJmN2M1MF7Oh18=: 00:23:52.244 07:13:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:52.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:52.504 07:13:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:52.504 07:13:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.504 07:13:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.504 07:13:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.504 07:13:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:52.504 07:13:06 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:52.504 07:13:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:52.504 07:13:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:23:52.504 07:13:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:52.504 07:13:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:52.504 07:13:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:52.504 07:13:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:52.504 07:13:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:52.504 07:13:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:52.504 07:13:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.504 07:13:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.504 07:13:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.504 07:13:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:52.504 07:13:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:52.763 00:23:52.763 07:13:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:52.763 07:13:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:52.763 07:13:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:53.022 07:13:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.022 07:13:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:53.022 07:13:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.022 07:13:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.022 07:13:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.022 
07:13:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:53.022 { 00:23:53.022 "cntlid": 19, 00:23:53.022 "qid": 0, 00:23:53.022 "state": "enabled", 00:23:53.022 "thread": "nvmf_tgt_poll_group_000", 00:23:53.022 "listen_address": { 00:23:53.022 "trtype": "RDMA", 00:23:53.022 "adrfam": "IPv4", 00:23:53.022 "traddr": "192.168.100.8", 00:23:53.022 "trsvcid": "4420" 00:23:53.022 }, 00:23:53.022 "peer_address": { 00:23:53.022 "trtype": "RDMA", 00:23:53.022 "adrfam": "IPv4", 00:23:53.022 "traddr": "192.168.100.8", 00:23:53.022 "trsvcid": "60387" 00:23:53.022 }, 00:23:53.022 "auth": { 00:23:53.022 "state": "completed", 00:23:53.022 "digest": "sha256", 00:23:53.022 "dhgroup": "ffdhe3072" 00:23:53.022 } 00:23:53.022 } 00:23:53.022 ]' 00:23:53.022 07:13:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:53.022 07:13:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:53.022 07:13:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:53.022 07:13:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:53.022 07:13:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:53.341 07:13:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:53.342 07:13:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:53.342 07:13:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:53.342 07:13:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OTAxMGE0NzI5NDZjODU3YzQ1ZmM5ZjRkN2Q4ZmZhMTVijb+l: --dhchap-ctrl-secret DHHC-1:02:YTIxM2I2NDc1YzRiMjI1ZWY5OTI5NjU0NDVlMmM0MTQ5OTI3MTE3MWIxODA0NTgyms9fpQ==: 00:23:53.911 07:13:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:54.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:54.170 07:13:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:54.170 07:13:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.170 07:13:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.170 07:13:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.170 07:13:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:54.170 07:13:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:54.170 07:13:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:54.429 07:13:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:23:54.429 07:13:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:54.429 07:13:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:54.429 07:13:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:54.429 07:13:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:54.429 07:13:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:54.429 07:13:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:54.429 07:13:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.429 07:13:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.429 07:13:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.429 07:13:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:54.429 07:13:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:54.429 00:23:54.429 07:13:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:54.429 07:13:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:54.429 07:13:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:54.688 07:13:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.688 07:13:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:54.688 07:13:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.688 07:13:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.688 07:13:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.688 07:13:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:54.688 { 00:23:54.688 "cntlid": 21, 00:23:54.688 "qid": 0, 00:23:54.688 "state": "enabled", 00:23:54.688 "thread": "nvmf_tgt_poll_group_000", 
00:23:54.688 "listen_address": { 00:23:54.688 "trtype": "RDMA", 00:23:54.688 "adrfam": "IPv4", 00:23:54.688 "traddr": "192.168.100.8", 00:23:54.688 "trsvcid": "4420" 00:23:54.688 }, 00:23:54.688 "peer_address": { 00:23:54.688 "trtype": "RDMA", 00:23:54.688 "adrfam": "IPv4", 00:23:54.688 "traddr": "192.168.100.8", 00:23:54.688 "trsvcid": "37956" 00:23:54.688 }, 00:23:54.688 "auth": { 00:23:54.688 "state": "completed", 00:23:54.688 "digest": "sha256", 00:23:54.688 "dhgroup": "ffdhe3072" 00:23:54.688 } 00:23:54.688 } 00:23:54.688 ]' 00:23:54.688 07:13:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:54.688 07:13:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:54.688 07:13:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:54.947 07:13:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:54.947 07:13:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:54.947 07:13:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:54.947 07:13:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:54.947 07:13:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:54.947 07:13:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:N2RhZGViYmNjMmRkZjczZTA0OGI2YTM1MzkzYWFmYTQ2YTYwMDVjMzgwZjAyODlkEkIl8w==: --dhchap-ctrl-secret DHHC-1:01:YmJhMzE4ODk3OWJkMDQ0Zjg5N2ZiMzZjOWYyYTJmNWN7FiE7: 00:23:55.885 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:55.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:55.885 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:55.885 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.885 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.885 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.885 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:55.885 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:55.885 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:55.885 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 
00:23:55.885 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:55.885 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:55.885 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:55.885 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:55.885 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:55.885 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:55.885 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.885 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.885 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.885 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:55.885 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:56.145 00:23:56.145 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:56.145 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:56.145 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:56.404 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.404 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:56.404 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.404 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.404 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.404 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:56.404 { 00:23:56.404 "cntlid": 23, 00:23:56.404 "qid": 0, 00:23:56.404 "state": "enabled", 00:23:56.404 "thread": "nvmf_tgt_poll_group_000", 00:23:56.404 "listen_address": { 00:23:56.404 "trtype": "RDMA", 00:23:56.404 "adrfam": "IPv4", 00:23:56.404 "traddr": "192.168.100.8", 00:23:56.404 "trsvcid": "4420" 00:23:56.404 }, 00:23:56.404 "peer_address": { 00:23:56.404 "trtype": "RDMA", 00:23:56.404 "adrfam": "IPv4", 00:23:56.404 "traddr": "192.168.100.8", 00:23:56.404 "trsvcid": "60554" 00:23:56.404 }, 00:23:56.404 
"auth": { 00:23:56.404 "state": "completed", 00:23:56.404 "digest": "sha256", 00:23:56.404 "dhgroup": "ffdhe3072" 00:23:56.404 } 00:23:56.404 } 00:23:56.404 ]' 00:23:56.404 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:56.404 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:56.404 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:56.404 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:56.404 07:13:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:56.404 07:13:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:56.404 07:13:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:56.404 07:13:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:56.663 07:13:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ZmQzMGE0MzcxYTM2OTNiODQ4MTVlYmY1N2QzOGZlMDJlMzEyYTE3MjNlYjc3MWY4OWUyYjUzMGU2MjA0ZDdlM4COFM4=: 00:23:57.230 07:13:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:57.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:57.489 07:13:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:57.489 07:13:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.489 07:13:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.489 07:13:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.489 07:13:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:57.489 07:13:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:57.489 07:13:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:57.489 07:13:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:57.748 07:13:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:23:57.748 07:13:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:57.748 07:13:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:57.748 07:13:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:57.748 07:13:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:57.748 07:13:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:57.748 07:13:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:57.748 07:13:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.748 07:13:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.748 07:13:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.748 07:13:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:57.749 07:13:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:58.008 00:23:58.008 07:13:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:58.008 07:13:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:58.008 07:13:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:58.008 07:13:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.008 07:13:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:58.008 07:13:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.008 07:13:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.008 07:13:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.008 07:13:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:58.008 { 00:23:58.008 "cntlid": 25, 00:23:58.008 "qid": 0, 00:23:58.008 "state": "enabled", 00:23:58.008 "thread": "nvmf_tgt_poll_group_000", 00:23:58.008 "listen_address": { 00:23:58.008 "trtype": "RDMA", 00:23:58.008 "adrfam": "IPv4", 00:23:58.008 "traddr": "192.168.100.8", 00:23:58.008 "trsvcid": "4420" 00:23:58.008 }, 00:23:58.008 "peer_address": { 00:23:58.008 "trtype": "RDMA", 00:23:58.008 "adrfam": "IPv4", 00:23:58.008 "traddr": "192.168.100.8", 00:23:58.008 "trsvcid": "60572" 00:23:58.008 }, 00:23:58.008 "auth": { 00:23:58.008 "state": "completed", 00:23:58.008 "digest": "sha256", 00:23:58.008 "dhgroup": "ffdhe4096" 00:23:58.008 } 00:23:58.008 } 00:23:58.008 ]' 00:23:58.008 07:13:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:58.267 07:13:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:58.267 07:13:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:58.267 07:13:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:58.267 07:13:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:58.267 07:13:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:58.267 07:13:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:58.267 07:13:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:58.526 07:13:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDg1ZDk5MjU3NjY3OTQ2ODc0ZDM0YWM1NGI0ZjdkM2U4NjRmNjRmZDQwZTY1YWMyAtInbA==: --dhchap-ctrl-secret DHHC-1:03:NjlkYTIzYjg2ZGFlY2MzYmUyMzBiYTQ2ZjcwMDAzZmM0MGQ3OGQxMTYwOWMzMWNjOTU5OTE5ZWFkYTJmN2M1MF7Oh18=: 00:23:59.094 07:13:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:59.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:59.094 07:13:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:59.094 07:13:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.094 07:13:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.094 07:13:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.094 07:13:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:59.094 07:13:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:59.094 07:13:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:59.352 07:13:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:23:59.352 07:13:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:59.352 07:13:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:59.352 07:13:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:59.352 07:13:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:59.352 07:13:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:59.352 07:13:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:59.352 07:13:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.352 07:13:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.352 07:13:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.352 07:13:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:59.352 07:13:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:59.610 00:23:59.610 07:13:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:59.610 07:13:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:59.610 07:13:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:59.868 07:13:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.868 07:13:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:59.868 07:13:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.868 07:13:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.868 07:13:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.868 07:13:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:59.868 { 00:23:59.868 "cntlid": 27, 00:23:59.868 "qid": 0, 00:23:59.868 "state": "enabled", 00:23:59.868 "thread": "nvmf_tgt_poll_group_000", 00:23:59.868 "listen_address": { 00:23:59.869 "trtype": "RDMA", 00:23:59.869 "adrfam": "IPv4", 00:23:59.869 "traddr": "192.168.100.8", 00:23:59.869 "trsvcid": "4420" 00:23:59.869 }, 00:23:59.869 "peer_address": { 00:23:59.869 "trtype": "RDMA", 00:23:59.869 "adrfam": "IPv4", 00:23:59.869 "traddr": "192.168.100.8", 00:23:59.869 "trsvcid": "45055" 00:23:59.869 }, 00:23:59.869 "auth": { 00:23:59.869 "state": "completed", 00:23:59.869 "digest": "sha256", 00:23:59.869 "dhgroup": "ffdhe4096" 00:23:59.869 } 00:23:59.869 } 00:23:59.869 ]' 00:23:59.869 07:13:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:59.869 07:13:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:59.869 07:13:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:59.869 07:13:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:59.869 07:13:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:59.869 07:13:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:59.869 07:13:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:59.869 07:13:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:00.127 07:13:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OTAxMGE0NzI5NDZjODU3YzQ1ZmM5ZjRkN2Q4ZmZhMTVijb+l: --dhchap-ctrl-secret DHHC-1:02:YTIxM2I2NDc1YzRiMjI1ZWY5OTI5NjU0NDVlMmM0MTQ5OTI3MTE3MWIxODA0NTgyms9fpQ==: 00:24:00.694 07:13:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:00.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:00.953 07:13:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:00.953 07:13:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.953 07:13:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.953 07:13:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.953 07:13:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:00.953 07:13:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:00.953 07:13:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:00.953 07:13:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:24:00.953 07:13:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:00.953 07:13:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:00.953 07:13:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:00.953 07:13:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:00.953 07:13:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:00.953 07:13:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key 
ckey2 00:24:00.953 07:13:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.953 07:13:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.953 07:13:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.953 07:13:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:00.953 07:13:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:01.212 00:24:01.212 07:13:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:01.212 07:13:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:01.212 07:13:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:01.471 07:13:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.471 07:13:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:01.471 07:13:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.471 07:13:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.471 07:13:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.471 07:13:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:01.471 { 00:24:01.471 "cntlid": 29, 00:24:01.471 "qid": 0, 00:24:01.471 "state": "enabled", 00:24:01.471 "thread": "nvmf_tgt_poll_group_000", 00:24:01.471 "listen_address": { 00:24:01.471 "trtype": "RDMA", 00:24:01.471 "adrfam": "IPv4", 00:24:01.471 "traddr": "192.168.100.8", 00:24:01.471 "trsvcid": "4420" 00:24:01.471 }, 00:24:01.471 "peer_address": { 00:24:01.471 "trtype": "RDMA", 00:24:01.471 "adrfam": "IPv4", 00:24:01.471 "traddr": "192.168.100.8", 00:24:01.471 "trsvcid": "60650" 00:24:01.471 }, 00:24:01.471 "auth": { 00:24:01.471 "state": "completed", 00:24:01.471 "digest": "sha256", 00:24:01.471 "dhgroup": "ffdhe4096" 00:24:01.471 } 00:24:01.471 } 00:24:01.471 ]' 00:24:01.471 07:13:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:01.472 07:13:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:01.472 07:13:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:01.472 07:13:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:01.472 07:13:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
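Stripped of the xtrace prefixes, each key in this part of the trace goes through the same RPC-level cycle. Below is a condensed, non-literal bash sketch of that cycle, reusing the socket path, address, and NQNs shown above; KEYID is a placeholder, and the assumption that the target-side calls (rpc_cmd) use the target application's default RPC socket is mine. The in-kernel nvme connect leg that follows each cycle is sketched further below.

# Condensed sketch (not the literal auth.sh code) of one connect_authenticate pass
# as it appears in this trace; KEYID stands for key0..key3.
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
KEYID=2

# host side: restrict DH-HMAC-CHAP to one digest/dhgroup combination
$RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
# target side (default RPC socket assumed): allow the host with this key pair
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key$KEYID --dhchap-ctrlr-key ckey$KEYID
# host side: attach and verify the authenticated qpair
$RPC -s $HOST_SOCK bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q $HOSTNQN -n $SUBNQN --dhchap-key key$KEYID --dhchap-ctrlr-key ckey$KEYID
$RPC -s $HOST_SOCK bdev_nvme_get_controllers | jq -r '.[].name'        # expect nvme0
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'       # expect "completed"
# tear down before the kernel-initiator check and the next key
$RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0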
00:24:01.731 07:13:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:01.731 07:13:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:01.731 07:13:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:01.731 07:13:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:N2RhZGViYmNjMmRkZjczZTA0OGI2YTM1MzkzYWFmYTQ2YTYwMDVjMzgwZjAyODlkEkIl8w==: --dhchap-ctrl-secret DHHC-1:01:YmJhMzE4ODk3OWJkMDQ0Zjg5N2ZiMzZjOWYyYTJmNWN7FiE7: 00:24:02.299 07:13:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:02.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:02.559 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:02.559 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.559 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.559 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.559 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:02.559 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:02.559 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:02.818 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:24:02.818 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:02.818 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:02.818 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:02.818 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:02.818 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:02.818 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:24:02.818 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.818 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.818 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.818 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:02.818 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:03.078 00:24:03.078 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:03.078 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:03.078 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:03.078 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.078 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:03.078 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.078 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:03.078 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.078 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:03.078 { 00:24:03.078 "cntlid": 31, 00:24:03.078 "qid": 0, 00:24:03.078 "state": "enabled", 00:24:03.078 "thread": "nvmf_tgt_poll_group_000", 00:24:03.078 "listen_address": { 00:24:03.078 "trtype": "RDMA", 00:24:03.078 "adrfam": "IPv4", 00:24:03.078 "traddr": "192.168.100.8", 00:24:03.078 "trsvcid": "4420" 00:24:03.078 }, 00:24:03.078 "peer_address": { 00:24:03.078 "trtype": "RDMA", 00:24:03.078 "adrfam": "IPv4", 00:24:03.078 "traddr": "192.168.100.8", 00:24:03.078 "trsvcid": "52936" 00:24:03.078 }, 00:24:03.078 "auth": { 00:24:03.078 "state": "completed", 00:24:03.078 "digest": "sha256", 00:24:03.078 "dhgroup": "ffdhe4096" 00:24:03.078 } 00:24:03.078 } 00:24:03.078 ]' 00:24:03.078 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:03.337 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:03.337 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:03.337 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:03.337 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:03.337 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:03.337 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:03.337 07:13:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:03.596 07:13:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ZmQzMGE0MzcxYTM2OTNiODQ4MTVlYmY1N2QzOGZlMDJlMzEyYTE3MjNlYjc3MWY4OWUyYjUzMGU2MjA0ZDdlM4COFM4=: 00:24:04.165 07:13:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:04.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:04.165 07:13:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:04.165 07:13:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.165 07:13:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.165 07:13:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.165 07:13:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:04.165 07:13:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:04.165 07:13:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:04.165 07:13:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:04.425 07:13:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:24:04.425 07:13:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:04.425 07:13:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:04.425 07:13:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:04.425 07:13:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:04.425 07:13:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:04.425 07:13:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:04.425 07:13:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.425 07:13:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.425 07:13:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.425 07:13:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:04.425 07:13:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:04.685 00:24:04.685 07:13:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:04.685 07:13:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:04.685 07:13:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:04.944 07:13:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.944 07:13:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:04.944 07:13:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.944 07:13:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.944 07:13:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.944 07:13:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:04.944 { 00:24:04.944 "cntlid": 33, 00:24:04.944 "qid": 0, 00:24:04.944 "state": "enabled", 00:24:04.944 "thread": "nvmf_tgt_poll_group_000", 00:24:04.944 "listen_address": { 00:24:04.944 "trtype": "RDMA", 00:24:04.944 "adrfam": "IPv4", 00:24:04.944 "traddr": "192.168.100.8", 00:24:04.944 "trsvcid": "4420" 00:24:04.944 }, 00:24:04.944 "peer_address": { 00:24:04.944 "trtype": "RDMA", 00:24:04.944 "adrfam": "IPv4", 00:24:04.944 "traddr": "192.168.100.8", 00:24:04.944 "trsvcid": "51363" 00:24:04.944 }, 00:24:04.944 "auth": { 00:24:04.944 "state": "completed", 00:24:04.944 "digest": "sha256", 00:24:04.944 "dhgroup": "ffdhe6144" 00:24:04.944 } 00:24:04.944 } 00:24:04.944 ]' 00:24:04.944 07:13:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:04.944 07:13:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:04.944 07:13:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:04.944 07:13:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:04.944 07:13:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:05.203 07:13:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:05.203 07:13:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:05.203 07:13:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:05.203 07:13:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDg1ZDk5MjU3NjY3OTQ2ODc0ZDM0YWM1NGI0ZjdkM2U4NjRmNjRmZDQwZTY1YWMyAtInbA==: --dhchap-ctrl-secret DHHC-1:03:NjlkYTIzYjg2ZGFlY2MzYmUyMzBiYTQ2ZjcwMDAzZmM0MGQ3OGQxMTYwOWMzMWNjOTU5OTE5ZWFkYTJmN2M1MF7Oh18=: 00:24:05.771 07:13:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:06.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:06.031 07:13:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:06.031 07:13:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.031 07:13:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.031 07:13:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.031 07:13:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:06.031 07:13:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:06.031 07:13:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:06.292 07:13:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:24:06.292 07:13:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:06.292 07:13:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:06.292 07:13:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:06.292 07:13:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:06.292 07:13:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:06.292 07:13:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:06.292 07:13:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.292 07:13:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.292 07:13:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.292 07:13:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:06.292 07:13:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:06.584 00:24:06.584 07:13:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:06.584 07:13:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:06.584 07:13:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:06.584 07:13:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.584 07:13:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:06.584 07:13:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.584 07:13:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.843 07:13:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.843 07:13:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:06.843 { 00:24:06.843 "cntlid": 35, 00:24:06.843 "qid": 0, 00:24:06.843 "state": "enabled", 00:24:06.843 "thread": "nvmf_tgt_poll_group_000", 00:24:06.843 "listen_address": { 00:24:06.843 "trtype": "RDMA", 00:24:06.843 "adrfam": "IPv4", 00:24:06.843 "traddr": "192.168.100.8", 00:24:06.843 "trsvcid": "4420" 00:24:06.843 }, 00:24:06.843 "peer_address": { 00:24:06.843 "trtype": "RDMA", 00:24:06.843 "adrfam": "IPv4", 00:24:06.843 "traddr": "192.168.100.8", 00:24:06.843 "trsvcid": "56263" 00:24:06.843 }, 00:24:06.843 "auth": { 00:24:06.843 "state": "completed", 00:24:06.843 "digest": "sha256", 00:24:06.843 "dhgroup": "ffdhe6144" 00:24:06.843 } 00:24:06.843 } 00:24:06.843 ]' 00:24:06.843 07:13:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:06.843 07:13:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:06.843 07:13:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:06.843 07:13:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:06.843 07:13:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:06.843 07:13:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:06.843 07:13:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:06.843 07:13:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:07.103 07:13:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret 
DHHC-1:01:OTAxMGE0NzI5NDZjODU3YzQ1ZmM5ZjRkN2Q4ZmZhMTVijb+l: --dhchap-ctrl-secret DHHC-1:02:YTIxM2I2NDc1YzRiMjI1ZWY5OTI5NjU0NDVlMmM0MTQ5OTI3MTE3MWIxODA0NTgyms9fpQ==: 00:24:07.671 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:07.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:07.671 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:07.671 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.671 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:07.671 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.671 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:07.671 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:07.671 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:07.930 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:24:07.930 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:07.930 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:07.930 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:07.930 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:07.930 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:07.930 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:07.930 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.930 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:07.930 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.930 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:07.930 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key 
ckey2 00:24:08.189 00:24:08.189 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:08.189 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:08.189 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:08.449 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.449 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:08.449 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.449 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.449 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.449 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:08.449 { 00:24:08.449 "cntlid": 37, 00:24:08.449 "qid": 0, 00:24:08.449 "state": "enabled", 00:24:08.449 "thread": "nvmf_tgt_poll_group_000", 00:24:08.449 "listen_address": { 00:24:08.449 "trtype": "RDMA", 00:24:08.449 "adrfam": "IPv4", 00:24:08.449 "traddr": "192.168.100.8", 00:24:08.449 "trsvcid": "4420" 00:24:08.449 }, 00:24:08.449 "peer_address": { 00:24:08.449 "trtype": "RDMA", 00:24:08.449 "adrfam": "IPv4", 00:24:08.449 "traddr": "192.168.100.8", 00:24:08.449 "trsvcid": "58786" 00:24:08.449 }, 00:24:08.449 "auth": { 00:24:08.449 "state": "completed", 00:24:08.449 "digest": "sha256", 00:24:08.449 "dhgroup": "ffdhe6144" 00:24:08.449 } 00:24:08.449 } 00:24:08.449 ]' 00:24:08.449 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:08.449 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:08.449 07:13:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:08.449 07:13:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:08.449 07:13:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:08.449 07:13:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:08.449 07:13:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:08.449 07:13:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:08.708 07:13:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:N2RhZGViYmNjMmRkZjczZTA0OGI2YTM1MzkzYWFmYTQ2YTYwMDVjMzgwZjAyODlkEkIl8w==: --dhchap-ctrl-secret DHHC-1:01:YmJhMzE4ODk3OWJkMDQ0Zjg5N2ZiMzZjOWYyYTJmNWN7FiE7: 00:24:09.275 07:13:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
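After each RPC-level pass, the same credentials are exercised from the kernel initiator with nvme-cli, as in the connect lines above. A minimal sketch of that leg is shown here, with SECRET and CTRL_SECRET standing in for the DHHC-1:xx:<base64>: strings printed in the log (the two-digit field after DHHC-1: identifies the hash used to transform the secret, 00 meaning no transformation).

# Sketch of the kernel-initiator check; SECRET/CTRL_SECRET are the DHHC-1 strings from the log.
nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
    --hostid 8013ee90-59d8-e711-906e-00163566263e \
    --dhchap-secret "$SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
# target side: drop the host again before the next iteration
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
    nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e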
00:24:09.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:09.533 07:13:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:09.533 07:13:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.533 07:13:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:09.533 07:13:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.533 07:13:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:09.533 07:13:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:09.533 07:13:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:09.792 07:13:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:24:09.792 07:13:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:09.792 07:13:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:09.792 07:13:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:09.792 07:13:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:09.792 07:13:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:09.792 07:13:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:24:09.792 07:13:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.792 07:13:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:09.792 07:13:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.792 07:13:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:09.792 07:13:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:10.051 00:24:10.051 07:13:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:10.051 07:13:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:10.051 07:13:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:10.310 07:13:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.310 07:13:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:10.310 07:13:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.310 07:13:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.310 07:13:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.310 07:13:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:10.310 { 00:24:10.310 "cntlid": 39, 00:24:10.310 "qid": 0, 00:24:10.310 "state": "enabled", 00:24:10.310 "thread": "nvmf_tgt_poll_group_000", 00:24:10.310 "listen_address": { 00:24:10.310 "trtype": "RDMA", 00:24:10.310 "adrfam": "IPv4", 00:24:10.310 "traddr": "192.168.100.8", 00:24:10.310 "trsvcid": "4420" 00:24:10.310 }, 00:24:10.310 "peer_address": { 00:24:10.310 "trtype": "RDMA", 00:24:10.310 "adrfam": "IPv4", 00:24:10.310 "traddr": "192.168.100.8", 00:24:10.310 "trsvcid": "34912" 00:24:10.310 }, 00:24:10.310 "auth": { 00:24:10.310 "state": "completed", 00:24:10.310 "digest": "sha256", 00:24:10.310 "dhgroup": "ffdhe6144" 00:24:10.310 } 00:24:10.310 } 00:24:10.310 ]' 00:24:10.310 07:13:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:10.310 07:13:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:10.310 07:13:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:10.310 07:13:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:10.310 07:13:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:10.310 07:13:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:10.310 07:13:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:10.310 07:13:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:10.568 07:13:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ZmQzMGE0MzcxYTM2OTNiODQ4MTVlYmY1N2QzOGZlMDJlMzEyYTE3MjNlYjc3MWY4OWUyYjUzMGU2MjA0ZDdlM4COFM4=: 00:24:11.136 07:13:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:11.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:11.136 07:13:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:11.136 07:13:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:11.136 07:13:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:11.136 07:13:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.136 07:13:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:11.136 07:13:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:11.136 07:13:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:11.136 07:13:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:11.395 07:13:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:24:11.395 07:13:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:11.395 07:13:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:11.395 07:13:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:11.395 07:13:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:11.395 07:13:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:11.395 07:13:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:11.395 07:13:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.395 07:13:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:11.395 07:13:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.395 07:13:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:11.396 07:13:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:11.963 00:24:11.963 07:13:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:11.963 07:13:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:11.963 07:13:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:11.963 07:13:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
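The target/auth.sh@92-@94 markers at this point show the script moving on to ffdhe8192. The outer loop driving this stretch of the trace can be reconstructed roughly as below; the dhgroups list is an assumption based only on the groups visible in this excerpt, and hostrpc/connect_authenticate are the script's own helpers.

# Rough reconstruction of the outer loop (assumed, not the literal script source).
dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)    # groups seen in this excerpt
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do                    # keys/ckeys 0..3 set up earlier in the run
        hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha256 "$dhgroup" "$keyid"
    done
done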
00:24:11.963 07:13:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:11.963 07:13:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.963 07:13:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:11.963 07:13:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.963 07:13:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:11.963 { 00:24:11.963 "cntlid": 41, 00:24:11.963 "qid": 0, 00:24:11.963 "state": "enabled", 00:24:11.963 "thread": "nvmf_tgt_poll_group_000", 00:24:11.963 "listen_address": { 00:24:11.963 "trtype": "RDMA", 00:24:11.963 "adrfam": "IPv4", 00:24:11.963 "traddr": "192.168.100.8", 00:24:11.963 "trsvcid": "4420" 00:24:11.963 }, 00:24:11.963 "peer_address": { 00:24:11.963 "trtype": "RDMA", 00:24:11.963 "adrfam": "IPv4", 00:24:11.963 "traddr": "192.168.100.8", 00:24:11.963 "trsvcid": "46554" 00:24:11.963 }, 00:24:11.963 "auth": { 00:24:11.963 "state": "completed", 00:24:11.963 "digest": "sha256", 00:24:11.963 "dhgroup": "ffdhe8192" 00:24:11.963 } 00:24:11.963 } 00:24:11.963 ]' 00:24:11.963 07:13:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:12.222 07:13:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:12.222 07:13:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:12.222 07:13:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:12.222 07:13:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:12.223 07:13:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:12.223 07:13:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:12.223 07:13:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:12.482 07:13:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDg1ZDk5MjU3NjY3OTQ2ODc0ZDM0YWM1NGI0ZjdkM2U4NjRmNjRmZDQwZTY1YWMyAtInbA==: --dhchap-ctrl-secret DHHC-1:03:NjlkYTIzYjg2ZGFlY2MzYmUyMzBiYTQ2ZjcwMDAzZmM0MGQ3OGQxMTYwOWMzMWNjOTU5OTE5ZWFkYTJmN2M1MF7Oh18=: 00:24:13.049 07:13:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:13.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:13.049 07:13:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:13.049 07:13:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.049 07:13:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:24:13.049 07:13:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.049 07:13:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:13.049 07:13:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:13.049 07:13:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:13.308 07:13:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:24:13.308 07:13:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:13.308 07:13:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:13.308 07:13:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:13.308 07:13:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:13.308 07:13:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:13.309 07:13:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.309 07:13:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.309 07:13:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:13.309 07:13:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.309 07:13:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.309 07:13:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.877 00:24:13.877 07:13:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:13.877 07:13:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:13.877 07:13:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:13.877 07:13:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.877 07:13:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:13.877 07:13:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:13.877 07:13:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:13.877 07:13:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.877 07:13:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:13.877 { 00:24:13.877 "cntlid": 43, 00:24:13.877 "qid": 0, 00:24:13.877 "state": "enabled", 00:24:13.877 "thread": "nvmf_tgt_poll_group_000", 00:24:13.877 "listen_address": { 00:24:13.877 "trtype": "RDMA", 00:24:13.877 "adrfam": "IPv4", 00:24:13.877 "traddr": "192.168.100.8", 00:24:13.877 "trsvcid": "4420" 00:24:13.877 }, 00:24:13.877 "peer_address": { 00:24:13.877 "trtype": "RDMA", 00:24:13.877 "adrfam": "IPv4", 00:24:13.878 "traddr": "192.168.100.8", 00:24:13.878 "trsvcid": "51329" 00:24:13.878 }, 00:24:13.878 "auth": { 00:24:13.878 "state": "completed", 00:24:13.878 "digest": "sha256", 00:24:13.878 "dhgroup": "ffdhe8192" 00:24:13.878 } 00:24:13.878 } 00:24:13.878 ]' 00:24:13.878 07:13:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:13.878 07:13:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:13.878 07:13:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:14.137 07:13:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:14.137 07:13:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:14.137 07:13:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:14.137 07:13:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:14.137 07:13:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:14.137 07:13:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OTAxMGE0NzI5NDZjODU3YzQ1ZmM5ZjRkN2Q4ZmZhMTVijb+l: --dhchap-ctrl-secret DHHC-1:02:YTIxM2I2NDc1YzRiMjI1ZWY5OTI5NjU0NDVlMmM0MTQ5OTI3MTE3MWIxODA0NTgyms9fpQ==: 00:24:14.705 07:13:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:14.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:14.965 07:13:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:14.965 07:13:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.965 07:13:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.965 07:13:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.965 07:13:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:14.965 07:13:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:14.965 07:13:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:15.224 07:13:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:24:15.224 07:13:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:15.224 07:13:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:15.224 07:13:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:15.224 07:13:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:15.224 07:13:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:15.224 07:13:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:15.224 07:13:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.224 07:13:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.224 07:13:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.224 07:13:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:15.224 07:13:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:15.482 00:24:15.742 07:13:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:15.742 07:13:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:15.742 07:13:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:15.742 07:13:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.742 07:13:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:15.742 07:13:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.742 07:13:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.742 07:13:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.742 07:13:30 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:15.742 { 00:24:15.742 "cntlid": 45, 00:24:15.742 "qid": 0, 00:24:15.742 "state": "enabled", 00:24:15.742 "thread": "nvmf_tgt_poll_group_000", 00:24:15.742 "listen_address": { 00:24:15.742 "trtype": "RDMA", 00:24:15.742 "adrfam": "IPv4", 00:24:15.742 "traddr": "192.168.100.8", 00:24:15.742 "trsvcid": "4420" 00:24:15.742 }, 00:24:15.742 "peer_address": { 00:24:15.742 "trtype": "RDMA", 00:24:15.742 "adrfam": "IPv4", 00:24:15.742 "traddr": "192.168.100.8", 00:24:15.742 "trsvcid": "59540" 00:24:15.742 }, 00:24:15.742 "auth": { 00:24:15.742 "state": "completed", 00:24:15.742 "digest": "sha256", 00:24:15.742 "dhgroup": "ffdhe8192" 00:24:15.742 } 00:24:15.742 } 00:24:15.742 ]' 00:24:15.742 07:13:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:15.742 07:13:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:15.742 07:13:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:16.001 07:13:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:16.001 07:13:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:16.001 07:13:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:16.001 07:13:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:16.001 07:13:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:16.001 07:13:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:N2RhZGViYmNjMmRkZjczZTA0OGI2YTM1MzkzYWFmYTQ2YTYwMDVjMzgwZjAyODlkEkIl8w==: --dhchap-ctrl-secret DHHC-1:01:YmJhMzE4ODk3OWJkMDQ0Zjg5N2ZiMzZjOWYyYTJmNWN7FiE7: 00:24:16.568 07:13:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:16.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:16.827 07:13:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:16.827 07:13:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.827 07:13:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:16.827 07:13:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.827 07:13:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:16.827 07:13:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:16.827 07:13:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:17.086 07:13:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:24:17.086 07:13:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:17.086 07:13:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:17.086 07:13:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:17.086 07:13:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:17.086 07:13:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:17.086 07:13:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:24:17.086 07:13:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.086 07:13:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:17.086 07:13:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.086 07:13:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:17.086 07:13:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:17.345 00:24:17.345 07:13:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:17.345 07:13:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:17.345 07:13:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:17.604 07:13:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.604 07:13:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:17.604 07:13:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.604 07:13:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:17.604 07:13:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.604 07:13:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:17.604 { 00:24:17.604 "cntlid": 47, 00:24:17.604 "qid": 0, 00:24:17.604 "state": "enabled", 00:24:17.604 "thread": "nvmf_tgt_poll_group_000", 00:24:17.604 "listen_address": { 00:24:17.604 "trtype": "RDMA", 00:24:17.604 "adrfam": "IPv4", 00:24:17.604 "traddr": "192.168.100.8", 00:24:17.604 
"trsvcid": "4420" 00:24:17.604 }, 00:24:17.604 "peer_address": { 00:24:17.604 "trtype": "RDMA", 00:24:17.604 "adrfam": "IPv4", 00:24:17.604 "traddr": "192.168.100.8", 00:24:17.604 "trsvcid": "34810" 00:24:17.604 }, 00:24:17.604 "auth": { 00:24:17.604 "state": "completed", 00:24:17.604 "digest": "sha256", 00:24:17.604 "dhgroup": "ffdhe8192" 00:24:17.604 } 00:24:17.604 } 00:24:17.604 ]' 00:24:17.604 07:13:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:17.604 07:13:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:17.604 07:13:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:17.862 07:13:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:17.862 07:13:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:17.862 07:13:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:17.862 07:13:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:17.862 07:13:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:17.863 07:13:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ZmQzMGE0MzcxYTM2OTNiODQ4MTVlYmY1N2QzOGZlMDJlMzEyYTE3MjNlYjc3MWY4OWUyYjUzMGU2MjA0ZDdlM4COFM4=: 00:24:18.800 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:18.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:18.800 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:18.800 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.800 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:18.800 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.800 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:24:18.800 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:18.800 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:18.800 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:24:18.800 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:24:18.800 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha384 null 0 00:24:18.800 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:18.800 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:18.800 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:18.800 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:18.800 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:18.800 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:18.800 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.800 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:18.800 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.800 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:18.800 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:19.059 00:24:19.059 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:19.059 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:19.059 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:19.317 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.317 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:19.317 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.317 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:19.317 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.317 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:19.317 { 00:24:19.317 "cntlid": 49, 00:24:19.317 "qid": 0, 00:24:19.317 "state": "enabled", 00:24:19.317 "thread": "nvmf_tgt_poll_group_000", 00:24:19.317 "listen_address": { 00:24:19.317 "trtype": "RDMA", 00:24:19.317 "adrfam": "IPv4", 00:24:19.317 "traddr": "192.168.100.8", 00:24:19.317 "trsvcid": "4420" 00:24:19.317 }, 00:24:19.317 "peer_address": { 00:24:19.317 "trtype": "RDMA", 00:24:19.317 "adrfam": 
"IPv4", 00:24:19.317 "traddr": "192.168.100.8", 00:24:19.317 "trsvcid": "36508" 00:24:19.317 }, 00:24:19.317 "auth": { 00:24:19.317 "state": "completed", 00:24:19.317 "digest": "sha384", 00:24:19.317 "dhgroup": "null" 00:24:19.317 } 00:24:19.317 } 00:24:19.317 ]' 00:24:19.317 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:19.317 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:19.317 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:19.317 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:24:19.317 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:19.317 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:19.317 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:19.317 07:13:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:19.610 07:13:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDg1ZDk5MjU3NjY3OTQ2ODc0ZDM0YWM1NGI0ZjdkM2U4NjRmNjRmZDQwZTY1YWMyAtInbA==: --dhchap-ctrl-secret DHHC-1:03:NjlkYTIzYjg2ZGFlY2MzYmUyMzBiYTQ2ZjcwMDAzZmM0MGQ3OGQxMTYwOWMzMWNjOTU5OTE5ZWFkYTJmN2M1MF7Oh18=: 00:24:20.202 07:13:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:20.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:20.461 07:13:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:20.461 07:13:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.461 07:13:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.461 07:13:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.461 07:13:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:20.461 07:13:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:24:20.461 07:13:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:24:20.461 07:13:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:24:20.461 07:13:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:20.461 07:13:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:20.461 07:13:35 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:20.461 07:13:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:20.461 07:13:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:20.461 07:13:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:20.461 07:13:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.461 07:13:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.461 07:13:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.461 07:13:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:20.461 07:13:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:20.720 00:24:20.720 07:13:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:20.720 07:13:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:20.720 07:13:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:20.979 07:13:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.979 07:13:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:20.979 07:13:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.979 07:13:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.979 07:13:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.979 07:13:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:20.979 { 00:24:20.979 "cntlid": 51, 00:24:20.979 "qid": 0, 00:24:20.979 "state": "enabled", 00:24:20.979 "thread": "nvmf_tgt_poll_group_000", 00:24:20.979 "listen_address": { 00:24:20.979 "trtype": "RDMA", 00:24:20.979 "adrfam": "IPv4", 00:24:20.979 "traddr": "192.168.100.8", 00:24:20.979 "trsvcid": "4420" 00:24:20.979 }, 00:24:20.979 "peer_address": { 00:24:20.979 "trtype": "RDMA", 00:24:20.979 "adrfam": "IPv4", 00:24:20.979 "traddr": "192.168.100.8", 00:24:20.979 "trsvcid": "49967" 00:24:20.979 }, 00:24:20.979 "auth": { 00:24:20.979 "state": "completed", 00:24:20.979 "digest": "sha384", 00:24:20.979 "dhgroup": "null" 00:24:20.979 } 00:24:20.979 } 00:24:20.979 ]' 00:24:20.979 07:13:35 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:20.979 07:13:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:20.979 07:13:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:20.979 07:13:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:24:20.979 07:13:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:21.238 07:13:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:21.238 07:13:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:21.238 07:13:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:21.238 07:13:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OTAxMGE0NzI5NDZjODU3YzQ1ZmM5ZjRkN2Q4ZmZhMTVijb+l: --dhchap-ctrl-secret DHHC-1:02:YTIxM2I2NDc1YzRiMjI1ZWY5OTI5NjU0NDVlMmM0MTQ5OTI3MTE3MWIxODA0NTgyms9fpQ==: 00:24:21.807 07:13:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:22.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:22.066 07:13:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:22.066 07:13:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.066 07:13:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:22.066 07:13:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.066 07:13:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:22.066 07:13:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:24:22.066 07:13:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:24:22.066 07:13:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:24:22.066 07:13:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:22.066 07:13:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:22.066 07:13:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:22.066 07:13:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:22.066 07:13:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
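The ckey=(...) expansion visible in the trace is what makes the controller key optional: when ckeys[$3] is unset or empty, the array stays empty and no --dhchap-ctrlr-key argument is passed, which is why the key3 iterations add the host with --dhchap-key key3 only. A minimal, self-contained illustration of that idiom (the array contents here are assumed for the example):

    # Minimal illustration of the ${var:+...} idiom used for the optional controller key.
    ckeys=("ckey0" "ckey1" "ckey2" "")     # index 3 intentionally empty (illustrative)
    for keyid in "${!ckeys[@]}"; do
        ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo "key$keyid: --dhchap-key key$keyid ${ckey[*]}"
    done
    # key0: --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # ...
    # key3: --dhchap-key key3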
00:24:22.066 07:13:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:22.066 07:13:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.066 07:13:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:22.325 07:13:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.325 07:13:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:22.325 07:13:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:22.325 00:24:22.325 07:13:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:22.325 07:13:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:22.325 07:13:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:22.584 07:13:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.584 07:13:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:22.584 07:13:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.584 07:13:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:22.584 07:13:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.584 07:13:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:22.584 { 00:24:22.584 "cntlid": 53, 00:24:22.584 "qid": 0, 00:24:22.584 "state": "enabled", 00:24:22.584 "thread": "nvmf_tgt_poll_group_000", 00:24:22.584 "listen_address": { 00:24:22.584 "trtype": "RDMA", 00:24:22.584 "adrfam": "IPv4", 00:24:22.584 "traddr": "192.168.100.8", 00:24:22.584 "trsvcid": "4420" 00:24:22.584 }, 00:24:22.584 "peer_address": { 00:24:22.584 "trtype": "RDMA", 00:24:22.584 "adrfam": "IPv4", 00:24:22.584 "traddr": "192.168.100.8", 00:24:22.584 "trsvcid": "39340" 00:24:22.584 }, 00:24:22.584 "auth": { 00:24:22.584 "state": "completed", 00:24:22.584 "digest": "sha384", 00:24:22.584 "dhgroup": "null" 00:24:22.584 } 00:24:22.584 } 00:24:22.584 ]' 00:24:22.584 07:13:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:22.584 07:13:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:22.584 07:13:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 
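The jq probes immediately above and below this point, applied to the output of nvmf_subsystem_get_qpairs, are how every iteration confirms that the negotiated digest, DH group, and authentication state match what was configured. One way to express the same checks, using the illustrative variables from the earlier sketch (the expected values correspond to the sha384/null combination being exercised at this point in the trace):

    # Illustrative re-statement of the auth checks performed in the trace.
    qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null      ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]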
00:24:22.584 07:13:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:24:22.584 07:13:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:22.844 07:13:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:22.844 07:13:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:22.844 07:13:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:22.844 07:13:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:N2RhZGViYmNjMmRkZjczZTA0OGI2YTM1MzkzYWFmYTQ2YTYwMDVjMzgwZjAyODlkEkIl8w==: --dhchap-ctrl-secret DHHC-1:01:YmJhMzE4ODk3OWJkMDQ0Zjg5N2ZiMzZjOWYyYTJmNWN7FiE7: 00:24:23.413 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:23.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:23.672 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:23.672 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.672 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:23.672 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.672 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:23.672 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:24:23.672 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:24:23.932 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:24:23.932 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:23.932 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:23.932 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:23.932 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:23.932 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:23.932 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:24:23.932 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:23.932 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:23.932 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.932 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:23.932 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:24.191 00:24:24.191 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:24.191 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:24.191 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:24.191 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.191 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:24.191 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.191 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:24.191 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.191 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:24.191 { 00:24:24.191 "cntlid": 55, 00:24:24.191 "qid": 0, 00:24:24.191 "state": "enabled", 00:24:24.191 "thread": "nvmf_tgt_poll_group_000", 00:24:24.191 "listen_address": { 00:24:24.191 "trtype": "RDMA", 00:24:24.191 "adrfam": "IPv4", 00:24:24.191 "traddr": "192.168.100.8", 00:24:24.191 "trsvcid": "4420" 00:24:24.191 }, 00:24:24.191 "peer_address": { 00:24:24.191 "trtype": "RDMA", 00:24:24.191 "adrfam": "IPv4", 00:24:24.191 "traddr": "192.168.100.8", 00:24:24.191 "trsvcid": "35102" 00:24:24.192 }, 00:24:24.192 "auth": { 00:24:24.192 "state": "completed", 00:24:24.192 "digest": "sha384", 00:24:24.192 "dhgroup": "null" 00:24:24.192 } 00:24:24.192 } 00:24:24.192 ]' 00:24:24.192 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:24.451 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:24.451 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:24.451 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:24:24.451 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:24.451 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:24.451 07:13:38 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:24.451 07:13:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:24.710 07:13:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ZmQzMGE0MzcxYTM2OTNiODQ4MTVlYmY1N2QzOGZlMDJlMzEyYTE3MjNlYjc3MWY4OWUyYjUzMGU2MjA0ZDdlM4COFM4=: 00:24:25.278 07:13:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:25.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:25.278 07:13:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:25.278 07:13:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.278 07:13:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:25.278 07:13:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.278 07:13:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:25.278 07:13:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:25.278 07:13:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:25.278 07:13:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:25.537 07:13:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:24:25.537 07:13:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:25.537 07:13:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:25.537 07:13:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:24:25.537 07:13:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:25.537 07:13:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:25.537 07:13:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:25.537 07:13:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.537 07:13:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:25.537 07:13:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.537 07:13:39 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:25.537 07:13:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:25.796 00:24:25.796 07:13:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:25.797 07:13:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:25.797 07:13:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:25.797 07:13:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.797 07:13:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:25.797 07:13:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.797 07:13:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:26.056 07:13:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.056 07:13:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:26.056 { 00:24:26.056 "cntlid": 57, 00:24:26.056 "qid": 0, 00:24:26.056 "state": "enabled", 00:24:26.056 "thread": "nvmf_tgt_poll_group_000", 00:24:26.056 "listen_address": { 00:24:26.056 "trtype": "RDMA", 00:24:26.056 "adrfam": "IPv4", 00:24:26.056 "traddr": "192.168.100.8", 00:24:26.056 "trsvcid": "4420" 00:24:26.056 }, 00:24:26.056 "peer_address": { 00:24:26.056 "trtype": "RDMA", 00:24:26.056 "adrfam": "IPv4", 00:24:26.056 "traddr": "192.168.100.8", 00:24:26.056 "trsvcid": "38405" 00:24:26.056 }, 00:24:26.056 "auth": { 00:24:26.056 "state": "completed", 00:24:26.056 "digest": "sha384", 00:24:26.056 "dhgroup": "ffdhe2048" 00:24:26.056 } 00:24:26.056 } 00:24:26.056 ]' 00:24:26.056 07:13:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:26.056 07:13:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:26.056 07:13:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:26.056 07:13:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:26.056 07:13:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:26.056 07:13:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:26.056 07:13:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:26.056 07:13:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:26.315 07:13:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDg1ZDk5MjU3NjY3OTQ2ODc0ZDM0YWM1NGI0ZjdkM2U4NjRmNjRmZDQwZTY1YWMyAtInbA==: --dhchap-ctrl-secret DHHC-1:03:NjlkYTIzYjg2ZGFlY2MzYmUyMzBiYTQ2ZjcwMDAzZmM0MGQ3OGQxMTYwOWMzMWNjOTU5OTE5ZWFkYTJmN2M1MF7Oh18=: 00:24:26.883 07:13:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:26.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:26.883 07:13:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:26.883 07:13:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.883 07:13:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:26.883 07:13:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.883 07:13:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:26.883 07:13:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:26.883 07:13:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:27.143 07:13:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:24:27.143 07:13:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:27.143 07:13:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:27.143 07:13:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:24:27.143 07:13:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:27.143 07:13:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:27.143 07:13:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:27.143 07:13:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.143 07:13:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:27.143 07:13:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.143 07:13:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:27.143 07:13:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:27.402 00:24:27.402 07:13:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:27.402 07:13:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:27.402 07:13:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:27.662 07:13:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.662 07:13:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:27.662 07:13:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.662 07:13:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:27.662 07:13:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.662 07:13:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:27.662 { 00:24:27.662 "cntlid": 59, 00:24:27.662 "qid": 0, 00:24:27.662 "state": "enabled", 00:24:27.662 "thread": "nvmf_tgt_poll_group_000", 00:24:27.662 "listen_address": { 00:24:27.662 "trtype": "RDMA", 00:24:27.662 "adrfam": "IPv4", 00:24:27.662 "traddr": "192.168.100.8", 00:24:27.662 "trsvcid": "4420" 00:24:27.662 }, 00:24:27.662 "peer_address": { 00:24:27.662 "trtype": "RDMA", 00:24:27.662 "adrfam": "IPv4", 00:24:27.662 "traddr": "192.168.100.8", 00:24:27.662 "trsvcid": "58827" 00:24:27.662 }, 00:24:27.662 "auth": { 00:24:27.662 "state": "completed", 00:24:27.662 "digest": "sha384", 00:24:27.662 "dhgroup": "ffdhe2048" 00:24:27.662 } 00:24:27.662 } 00:24:27.662 ]' 00:24:27.662 07:13:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:27.662 07:13:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:27.662 07:13:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:27.662 07:13:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:27.662 07:13:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:27.662 07:13:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:27.662 07:13:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:27.662 07:13:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:27.921 07:13:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OTAxMGE0NzI5NDZjODU3YzQ1ZmM5ZjRkN2Q4ZmZhMTVijb+l: --dhchap-ctrl-secret DHHC-1:02:YTIxM2I2NDc1YzRiMjI1ZWY5OTI5NjU0NDVlMmM0MTQ5OTI3MTE3MWIxODA0NTgyms9fpQ==: 00:24:28.489 07:13:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:28.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:28.489 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:28.489 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.489 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.489 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.489 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:28.489 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:28.489 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:28.748 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:24:28.748 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:28.748 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:28.748 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:24:28.748 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:28.748 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:28.748 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:28.748 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.748 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.748 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.748 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:28.748 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:29.006 00:24:29.006 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:29.006 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:29.006 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:29.264 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.264 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:29.264 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.264 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.264 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.264 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:29.264 { 00:24:29.264 "cntlid": 61, 00:24:29.264 "qid": 0, 00:24:29.264 "state": "enabled", 00:24:29.264 "thread": "nvmf_tgt_poll_group_000", 00:24:29.264 "listen_address": { 00:24:29.264 "trtype": "RDMA", 00:24:29.264 "adrfam": "IPv4", 00:24:29.264 "traddr": "192.168.100.8", 00:24:29.264 "trsvcid": "4420" 00:24:29.264 }, 00:24:29.264 "peer_address": { 00:24:29.264 "trtype": "RDMA", 00:24:29.264 "adrfam": "IPv4", 00:24:29.264 "traddr": "192.168.100.8", 00:24:29.264 "trsvcid": "41081" 00:24:29.264 }, 00:24:29.264 "auth": { 00:24:29.264 "state": "completed", 00:24:29.264 "digest": "sha384", 00:24:29.264 "dhgroup": "ffdhe2048" 00:24:29.264 } 00:24:29.264 } 00:24:29.264 ]' 00:24:29.264 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:29.264 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:29.264 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:29.264 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:29.264 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:29.264 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:29.264 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:29.264 07:13:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:29.523 07:13:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:N2RhZGViYmNjMmRkZjczZTA0OGI2YTM1MzkzYWFmYTQ2YTYwMDVjMzgwZjAyODlkEkIl8w==: --dhchap-ctrl-secret 
DHHC-1:01:YmJhMzE4ODk3OWJkMDQ0Zjg5N2ZiMzZjOWYyYTJmNWN7FiE7: 00:24:30.091 07:13:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:30.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:30.350 07:13:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:30.350 07:13:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.350 07:13:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:30.350 07:13:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.350 07:13:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:30.350 07:13:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:30.350 07:13:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:30.350 07:13:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:24:30.350 07:13:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:30.350 07:13:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:30.350 07:13:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:24:30.350 07:13:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:30.350 07:13:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:30.350 07:13:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:24:30.350 07:13:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.350 07:13:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:30.350 07:13:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.350 07:13:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:30.350 07:13:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:30.609 00:24:30.609 07:13:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:30.609 07:13:45 
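For orientation, a minimal sketch (not part of the log) of the RPC sequence each connect_authenticate iteration above drives, built only from calls shown in the trace; hostrpc is the auth.sh wrapper for rpc.py -s /var/tmp/host.sock (see the @31 lines), rpc_cmd is assumed to address the target-side RPC socket, and the named keys (key1/ckey1) are assumed to have been generated earlier in the script:
  # restrict the host to a single digest/dhgroup combination
  hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
  # allow the host NQN on the subsystem with the matching DH-CHAP keys
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # attach from the host application; DH-HMAC-CHAP runs during this connect
  hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1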
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:30.609 07:13:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:30.868 07:13:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.868 07:13:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:30.868 07:13:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.868 07:13:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:30.868 07:13:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.868 07:13:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:30.868 { 00:24:30.868 "cntlid": 63, 00:24:30.868 "qid": 0, 00:24:30.868 "state": "enabled", 00:24:30.868 "thread": "nvmf_tgt_poll_group_000", 00:24:30.868 "listen_address": { 00:24:30.868 "trtype": "RDMA", 00:24:30.868 "adrfam": "IPv4", 00:24:30.868 "traddr": "192.168.100.8", 00:24:30.868 "trsvcid": "4420" 00:24:30.868 }, 00:24:30.868 "peer_address": { 00:24:30.868 "trtype": "RDMA", 00:24:30.868 "adrfam": "IPv4", 00:24:30.868 "traddr": "192.168.100.8", 00:24:30.868 "trsvcid": "55245" 00:24:30.868 }, 00:24:30.868 "auth": { 00:24:30.868 "state": "completed", 00:24:30.868 "digest": "sha384", 00:24:30.868 "dhgroup": "ffdhe2048" 00:24:30.868 } 00:24:30.868 } 00:24:30.868 ]' 00:24:30.868 07:13:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:30.868 07:13:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:30.868 07:13:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:30.868 07:13:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:30.868 07:13:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:30.868 07:13:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:30.868 07:13:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:30.868 07:13:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:31.127 07:13:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ZmQzMGE0MzcxYTM2OTNiODQ4MTVlYmY1N2QzOGZlMDJlMzEyYTE3MjNlYjc3MWY4OWUyYjUzMGU2MjA0ZDdlM4COFM4=: 00:24:31.693 07:13:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:31.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:31.952 07:13:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:31.952 07:13:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.952 07:13:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:31.952 07:13:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.952 07:13:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:31.952 07:13:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:31.952 07:13:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:31.952 07:13:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:32.212 07:13:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:24:32.212 07:13:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:32.212 07:13:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:32.212 07:13:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:32.212 07:13:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:32.212 07:13:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:32.212 07:13:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:32.212 07:13:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.212 07:13:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:32.212 07:13:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.212 07:13:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:32.212 07:13:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:32.473 00:24:32.473 07:13:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:32.473 07:13:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:32.473 07:13:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:32.473 07:13:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.473 07:13:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:32.473 07:13:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.473 07:13:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:32.473 07:13:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.473 07:13:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:32.473 { 00:24:32.473 "cntlid": 65, 00:24:32.473 "qid": 0, 00:24:32.473 "state": "enabled", 00:24:32.473 "thread": "nvmf_tgt_poll_group_000", 00:24:32.473 "listen_address": { 00:24:32.473 "trtype": "RDMA", 00:24:32.473 "adrfam": "IPv4", 00:24:32.473 "traddr": "192.168.100.8", 00:24:32.473 "trsvcid": "4420" 00:24:32.473 }, 00:24:32.473 "peer_address": { 00:24:32.473 "trtype": "RDMA", 00:24:32.473 "adrfam": "IPv4", 00:24:32.473 "traddr": "192.168.100.8", 00:24:32.473 "trsvcid": "49887" 00:24:32.473 }, 00:24:32.473 "auth": { 00:24:32.473 "state": "completed", 00:24:32.473 "digest": "sha384", 00:24:32.473 "dhgroup": "ffdhe3072" 00:24:32.473 } 00:24:32.473 } 00:24:32.473 ]' 00:24:32.473 07:13:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:32.473 07:13:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:32.473 07:13:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:32.772 07:13:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:32.772 07:13:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:32.772 07:13:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:32.772 07:13:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:32.772 07:13:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:32.772 07:13:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDg1ZDk5MjU3NjY3OTQ2ODc0ZDM0YWM1NGI0ZjdkM2U4NjRmNjRmZDQwZTY1YWMyAtInbA==: --dhchap-ctrl-secret DHHC-1:03:NjlkYTIzYjg2ZGFlY2MzYmUyMzBiYTQ2ZjcwMDAzZmM0MGQ3OGQxMTYwOWMzMWNjOTU5OTE5ZWFkYTJmN2M1MF7Oh18=: 00:24:33.723 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:33.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:33.723 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:33.723 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.723 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:33.723 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.723 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:33.723 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:33.723 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:33.723 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:24:33.723 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:33.723 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:33.723 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:33.723 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:33.723 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:33.723 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:33.723 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.723 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:33.723 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.723 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:33.723 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:33.982 00:24:33.982 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:33.982 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:33.982 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:34.242 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.242 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:34.242 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.242 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:34.242 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.242 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:34.242 { 00:24:34.242 "cntlid": 67, 00:24:34.242 "qid": 0, 00:24:34.242 "state": "enabled", 00:24:34.242 "thread": "nvmf_tgt_poll_group_000", 00:24:34.242 "listen_address": { 00:24:34.242 "trtype": "RDMA", 00:24:34.242 "adrfam": "IPv4", 00:24:34.242 "traddr": "192.168.100.8", 00:24:34.242 "trsvcid": "4420" 00:24:34.242 }, 00:24:34.242 "peer_address": { 00:24:34.242 "trtype": "RDMA", 00:24:34.242 "adrfam": "IPv4", 00:24:34.242 "traddr": "192.168.100.8", 00:24:34.242 "trsvcid": "43193" 00:24:34.242 }, 00:24:34.242 "auth": { 00:24:34.242 "state": "completed", 00:24:34.242 "digest": "sha384", 00:24:34.242 "dhgroup": "ffdhe3072" 00:24:34.242 } 00:24:34.242 } 00:24:34.242 ]' 00:24:34.242 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:34.242 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:34.242 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:34.242 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:34.242 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:34.501 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:34.501 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:34.501 07:13:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:34.501 07:13:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OTAxMGE0NzI5NDZjODU3YzQ1ZmM5ZjRkN2Q4ZmZhMTVijb+l: --dhchap-ctrl-secret DHHC-1:02:YTIxM2I2NDc1YzRiMjI1ZWY5OTI5NjU0NDVlMmM0MTQ5OTI3MTE3MWIxODA0NTgyms9fpQ==: 00:24:35.438 07:13:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:35.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:35.438 07:13:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:35.438 07:13:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.438 07:13:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:35.438 07:13:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.438 07:13:49 
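A sketch of the verification step that repeats throughout this trace: the negotiated parameters are read back from the target with nvmf_subsystem_get_qpairs and compared against what was configured (field names and expected values taken from the JSON dumps and jq checks above):
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]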
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:35.438 07:13:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:35.438 07:13:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:35.438 07:13:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:24:35.438 07:13:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:35.438 07:13:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:35.438 07:13:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:35.438 07:13:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:35.438 07:13:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:35.438 07:13:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:35.438 07:13:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.438 07:13:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:35.438 07:13:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.438 07:13:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:35.438 07:13:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:35.698 00:24:35.698 07:13:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:35.698 07:13:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:35.698 07:13:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:35.957 07:13:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.957 07:13:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:35.957 07:13:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.957 07:13:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:35.957 
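The ckey assignment traced at target/auth.sh@37 above is worth calling out; a short sketch of the idiom (keyid, subnqn and hostnqn are placeholder names, the trace uses positional parameters): when no controller key exists for a key index, the array expands to nothing and --dhchap-ctrlr-key is simply omitted, which is why the key3 iterations above call nvmf_subsystem_add_host with only --dhchap-key key3:
  # expands to the flag pair only when ckeys[keyid] is set and non-empty
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"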
07:13:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.957 07:13:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:35.957 { 00:24:35.957 "cntlid": 69, 00:24:35.957 "qid": 0, 00:24:35.957 "state": "enabled", 00:24:35.957 "thread": "nvmf_tgt_poll_group_000", 00:24:35.957 "listen_address": { 00:24:35.957 "trtype": "RDMA", 00:24:35.957 "adrfam": "IPv4", 00:24:35.957 "traddr": "192.168.100.8", 00:24:35.957 "trsvcid": "4420" 00:24:35.957 }, 00:24:35.957 "peer_address": { 00:24:35.957 "trtype": "RDMA", 00:24:35.957 "adrfam": "IPv4", 00:24:35.957 "traddr": "192.168.100.8", 00:24:35.957 "trsvcid": "59678" 00:24:35.957 }, 00:24:35.957 "auth": { 00:24:35.957 "state": "completed", 00:24:35.957 "digest": "sha384", 00:24:35.957 "dhgroup": "ffdhe3072" 00:24:35.957 } 00:24:35.957 } 00:24:35.957 ]' 00:24:35.957 07:13:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:35.957 07:13:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:35.957 07:13:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:35.957 07:13:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:35.957 07:13:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:36.216 07:13:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:36.216 07:13:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:36.216 07:13:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:36.216 07:13:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:N2RhZGViYmNjMmRkZjczZTA0OGI2YTM1MzkzYWFmYTQ2YTYwMDVjMzgwZjAyODlkEkIl8w==: --dhchap-ctrl-secret DHHC-1:01:YmJhMzE4ODk3OWJkMDQ0Zjg5N2ZiMzZjOWYyYTJmNWN7FiE7: 00:24:36.784 07:13:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:37.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:37.043 07:13:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:37.043 07:13:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.043 07:13:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:37.043 07:13:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.043 07:13:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:37.043 07:13:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:37.043 07:13:51 
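The nvme connect/disconnect pair in the trace above exercises the same key material from the kernel initiator; a condensed sketch with the secrets abbreviated (the full DHHC-1 strings appear verbatim in the log):
  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
      --hostid 8013ee90-59d8-e711-906e-00163566263e \
      --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0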
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:37.043 07:13:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:24:37.043 07:13:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:37.043 07:13:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:37.043 07:13:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:37.043 07:13:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:37.043 07:13:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:37.043 07:13:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:24:37.043 07:13:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.043 07:13:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:37.302 07:13:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.302 07:13:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:37.302 07:13:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:37.302 00:24:37.302 07:13:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:37.302 07:13:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:37.302 07:13:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:37.561 07:13:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.561 07:13:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:37.561 07:13:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.561 07:13:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:37.561 07:13:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.561 07:13:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:37.561 { 00:24:37.561 "cntlid": 71, 00:24:37.561 "qid": 0, 00:24:37.561 "state": "enabled", 00:24:37.561 "thread": "nvmf_tgt_poll_group_000", 00:24:37.561 
"listen_address": { 00:24:37.561 "trtype": "RDMA", 00:24:37.561 "adrfam": "IPv4", 00:24:37.561 "traddr": "192.168.100.8", 00:24:37.561 "trsvcid": "4420" 00:24:37.561 }, 00:24:37.561 "peer_address": { 00:24:37.561 "trtype": "RDMA", 00:24:37.561 "adrfam": "IPv4", 00:24:37.561 "traddr": "192.168.100.8", 00:24:37.561 "trsvcid": "44742" 00:24:37.561 }, 00:24:37.561 "auth": { 00:24:37.561 "state": "completed", 00:24:37.561 "digest": "sha384", 00:24:37.561 "dhgroup": "ffdhe3072" 00:24:37.561 } 00:24:37.561 } 00:24:37.561 ]' 00:24:37.561 07:13:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:37.561 07:13:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:37.561 07:13:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:37.561 07:13:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:37.561 07:13:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:37.820 07:13:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:37.820 07:13:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:37.820 07:13:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:37.820 07:13:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ZmQzMGE0MzcxYTM2OTNiODQ4MTVlYmY1N2QzOGZlMDJlMzEyYTE3MjNlYjc3MWY4OWUyYjUzMGU2MjA0ZDdlM4COFM4=: 00:24:38.754 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:38.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:38.754 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:38.754 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.754 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.754 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.754 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:38.754 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:38.754 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:38.754 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:38.754 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 ffdhe4096 0 00:24:38.754 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:38.754 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:38.754 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:38.754 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:38.754 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:38.754 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:38.754 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.754 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.754 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.754 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:38.754 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:39.012 00:24:39.012 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:39.012 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:39.012 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:39.270 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.270 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:39.270 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.270 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:39.270 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.270 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:39.270 { 00:24:39.270 "cntlid": 73, 00:24:39.270 "qid": 0, 00:24:39.270 "state": "enabled", 00:24:39.270 "thread": "nvmf_tgt_poll_group_000", 00:24:39.270 "listen_address": { 00:24:39.270 "trtype": "RDMA", 00:24:39.270 "adrfam": "IPv4", 00:24:39.270 "traddr": "192.168.100.8", 00:24:39.270 "trsvcid": "4420" 00:24:39.270 }, 00:24:39.270 "peer_address": { 00:24:39.270 "trtype": "RDMA", 00:24:39.270 
"adrfam": "IPv4", 00:24:39.270 "traddr": "192.168.100.8", 00:24:39.270 "trsvcid": "55430" 00:24:39.270 }, 00:24:39.270 "auth": { 00:24:39.270 "state": "completed", 00:24:39.270 "digest": "sha384", 00:24:39.270 "dhgroup": "ffdhe4096" 00:24:39.270 } 00:24:39.270 } 00:24:39.270 ]' 00:24:39.270 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:39.270 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:39.270 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:39.270 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:39.270 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:39.529 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:39.529 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:39.529 07:13:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:39.529 07:13:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDg1ZDk5MjU3NjY3OTQ2ODc0ZDM0YWM1NGI0ZjdkM2U4NjRmNjRmZDQwZTY1YWMyAtInbA==: --dhchap-ctrl-secret DHHC-1:03:NjlkYTIzYjg2ZGFlY2MzYmUyMzBiYTQ2ZjcwMDAzZmM0MGQ3OGQxMTYwOWMzMWNjOTU5OTE5ZWFkYTJmN2M1MF7Oh18=: 00:24:40.096 07:13:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:40.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:40.354 07:13:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:40.354 07:13:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.354 07:13:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:40.354 07:13:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.354 07:13:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:40.354 07:13:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:40.354 07:13:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:40.613 07:13:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:24:40.613 07:13:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:40.613 07:13:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:24:40.613 07:13:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:40.613 07:13:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:40.613 07:13:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:40.613 07:13:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:40.613 07:13:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.613 07:13:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:40.613 07:13:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.613 07:13:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:40.613 07:13:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:40.871 00:24:40.872 07:13:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:40.872 07:13:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:40.872 07:13:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:40.872 07:13:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.872 07:13:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:40.872 07:13:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.872 07:13:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:40.872 07:13:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.872 07:13:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:40.872 { 00:24:40.872 "cntlid": 75, 00:24:40.872 "qid": 0, 00:24:40.872 "state": "enabled", 00:24:40.872 "thread": "nvmf_tgt_poll_group_000", 00:24:40.872 "listen_address": { 00:24:40.872 "trtype": "RDMA", 00:24:40.872 "adrfam": "IPv4", 00:24:40.872 "traddr": "192.168.100.8", 00:24:40.872 "trsvcid": "4420" 00:24:40.872 }, 00:24:40.872 "peer_address": { 00:24:40.872 "trtype": "RDMA", 00:24:40.872 "adrfam": "IPv4", 00:24:40.872 "traddr": "192.168.100.8", 00:24:40.872 "trsvcid": "43098" 00:24:40.872 }, 00:24:40.872 "auth": { 00:24:40.872 "state": "completed", 00:24:40.872 "digest": "sha384", 00:24:40.872 "dhgroup": "ffdhe4096" 00:24:40.872 } 00:24:40.872 } 
00:24:40.872 ]' 00:24:40.872 07:13:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:41.130 07:13:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:41.130 07:13:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:41.130 07:13:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:41.130 07:13:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:41.130 07:13:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:41.130 07:13:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:41.130 07:13:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:41.389 07:13:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OTAxMGE0NzI5NDZjODU3YzQ1ZmM5ZjRkN2Q4ZmZhMTVijb+l: --dhchap-ctrl-secret DHHC-1:02:YTIxM2I2NDc1YzRiMjI1ZWY5OTI5NjU0NDVlMmM0MTQ5OTI3MTE3MWIxODA0NTgyms9fpQ==: 00:24:41.956 07:13:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:41.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:41.956 07:13:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:41.956 07:13:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.956 07:13:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.956 07:13:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.956 07:13:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:41.956 07:13:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:41.956 07:13:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:42.215 07:13:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:24:42.215 07:13:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:42.215 07:13:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:42.215 07:13:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:42.215 07:13:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:42.215 07:13:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:42.215 07:13:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:42.215 07:13:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.215 07:13:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:42.215 07:13:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.215 07:13:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:42.215 07:13:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:42.473 00:24:42.473 07:13:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:42.473 07:13:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:42.473 07:13:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:42.731 07:13:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.731 07:13:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:42.731 07:13:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.731 07:13:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:42.731 07:13:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.731 07:13:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:42.731 { 00:24:42.731 "cntlid": 77, 00:24:42.731 "qid": 0, 00:24:42.731 "state": "enabled", 00:24:42.731 "thread": "nvmf_tgt_poll_group_000", 00:24:42.731 "listen_address": { 00:24:42.731 "trtype": "RDMA", 00:24:42.731 "adrfam": "IPv4", 00:24:42.731 "traddr": "192.168.100.8", 00:24:42.731 "trsvcid": "4420" 00:24:42.731 }, 00:24:42.731 "peer_address": { 00:24:42.731 "trtype": "RDMA", 00:24:42.731 "adrfam": "IPv4", 00:24:42.731 "traddr": "192.168.100.8", 00:24:42.731 "trsvcid": "33402" 00:24:42.731 }, 00:24:42.731 "auth": { 00:24:42.731 "state": "completed", 00:24:42.731 "digest": "sha384", 00:24:42.731 "dhgroup": "ffdhe4096" 00:24:42.731 } 00:24:42.731 } 00:24:42.731 ]' 00:24:42.731 07:13:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:42.731 07:13:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:42.731 07:13:57 
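Stepping back, the target/auth.sh@92 and @93 markers above show the overall shape of this part of the test; a sketch of the implied nesting (array names dhgroups/keys come from the expansions in the trace, and sha384 is shown already expanded in these iterations):
  for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048, ffdhe3072, ffdhe4096, ...
      for keyid in "${!keys[@]}"; do       # 0 1 2 3
          hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha384 "$dhgroup" "$keyid"
      done
  done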
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:42.731 07:13:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:42.731 07:13:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:42.731 07:13:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:42.731 07:13:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:42.731 07:13:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:42.989 07:13:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:N2RhZGViYmNjMmRkZjczZTA0OGI2YTM1MzkzYWFmYTQ2YTYwMDVjMzgwZjAyODlkEkIl8w==: --dhchap-ctrl-secret DHHC-1:01:YmJhMzE4ODk3OWJkMDQ0Zjg5N2ZiMzZjOWYyYTJmNWN7FiE7: 00:24:43.555 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:43.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:43.814 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:43.814 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.814 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:43.814 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.814 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:43.814 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:43.814 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:43.814 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:24:43.814 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:43.814 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:43.814 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:43.814 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:43.814 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:43.814 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:24:43.814 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.814 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:43.814 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.814 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:43.814 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:44.073 00:24:44.073 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:44.073 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:44.073 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:44.332 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.332 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:44.332 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.332 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:44.332 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.332 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:44.332 { 00:24:44.332 "cntlid": 79, 00:24:44.332 "qid": 0, 00:24:44.332 "state": "enabled", 00:24:44.332 "thread": "nvmf_tgt_poll_group_000", 00:24:44.332 "listen_address": { 00:24:44.332 "trtype": "RDMA", 00:24:44.332 "adrfam": "IPv4", 00:24:44.332 "traddr": "192.168.100.8", 00:24:44.332 "trsvcid": "4420" 00:24:44.332 }, 00:24:44.332 "peer_address": { 00:24:44.332 "trtype": "RDMA", 00:24:44.332 "adrfam": "IPv4", 00:24:44.332 "traddr": "192.168.100.8", 00:24:44.332 "trsvcid": "45543" 00:24:44.332 }, 00:24:44.332 "auth": { 00:24:44.332 "state": "completed", 00:24:44.332 "digest": "sha384", 00:24:44.332 "dhgroup": "ffdhe4096" 00:24:44.332 } 00:24:44.332 } 00:24:44.332 ]' 00:24:44.332 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:44.332 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:44.332 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:44.332 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:44.332 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # 
jq -r '.[0].auth.state' 00:24:44.591 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:44.591 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:44.591 07:13:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:44.591 07:13:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ZmQzMGE0MzcxYTM2OTNiODQ4MTVlYmY1N2QzOGZlMDJlMzEyYTE3MjNlYjc3MWY4OWUyYjUzMGU2MjA0ZDdlM4COFM4=: 00:24:45.159 07:13:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:45.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:45.418 07:13:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:45.418 07:13:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.418 07:13:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:45.418 07:13:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.418 07:13:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:45.418 07:13:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:45.418 07:13:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:45.418 07:13:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:45.676 07:14:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:24:45.677 07:14:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:45.677 07:14:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:45.677 07:14:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:45.677 07:14:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:45.677 07:14:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:45.677 07:14:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:45.677 07:14:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.677 07:14:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:24:45.677 07:14:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.677 07:14:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:45.677 07:14:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:45.978 00:24:45.978 07:14:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:45.978 07:14:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:45.978 07:14:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:46.238 07:14:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.238 07:14:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:46.238 07:14:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.238 07:14:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:46.238 07:14:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.238 07:14:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:46.238 { 00:24:46.238 "cntlid": 81, 00:24:46.238 "qid": 0, 00:24:46.238 "state": "enabled", 00:24:46.238 "thread": "nvmf_tgt_poll_group_000", 00:24:46.238 "listen_address": { 00:24:46.238 "trtype": "RDMA", 00:24:46.238 "adrfam": "IPv4", 00:24:46.238 "traddr": "192.168.100.8", 00:24:46.238 "trsvcid": "4420" 00:24:46.238 }, 00:24:46.238 "peer_address": { 00:24:46.238 "trtype": "RDMA", 00:24:46.238 "adrfam": "IPv4", 00:24:46.238 "traddr": "192.168.100.8", 00:24:46.238 "trsvcid": "38332" 00:24:46.238 }, 00:24:46.238 "auth": { 00:24:46.238 "state": "completed", 00:24:46.238 "digest": "sha384", 00:24:46.238 "dhgroup": "ffdhe6144" 00:24:46.238 } 00:24:46.238 } 00:24:46.238 ]' 00:24:46.238 07:14:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:46.238 07:14:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:46.238 07:14:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:46.238 07:14:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:46.238 07:14:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:46.238 07:14:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:46.238 07:14:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:46.238 07:14:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:46.498 07:14:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDg1ZDk5MjU3NjY3OTQ2ODc0ZDM0YWM1NGI0ZjdkM2U4NjRmNjRmZDQwZTY1YWMyAtInbA==: --dhchap-ctrl-secret DHHC-1:03:NjlkYTIzYjg2ZGFlY2MzYmUyMzBiYTQ2ZjcwMDAzZmM0MGQ3OGQxMTYwOWMzMWNjOTU5OTE5ZWFkYTJmN2M1MF7Oh18=: 00:24:47.066 07:14:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:47.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:47.066 07:14:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:47.066 07:14:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.066 07:14:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:47.066 07:14:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.066 07:14:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:47.066 07:14:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:47.066 07:14:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:47.325 07:14:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:24:47.325 07:14:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:47.325 07:14:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:47.325 07:14:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:47.325 07:14:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:47.325 07:14:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:47.325 07:14:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:47.325 07:14:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.325 07:14:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:47.325 07:14:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.325 07:14:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:47.325 07:14:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:47.584 00:24:47.584 07:14:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:47.584 07:14:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:47.584 07:14:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:47.843 07:14:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.843 07:14:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:47.843 07:14:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.843 07:14:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:47.843 07:14:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.843 07:14:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:47.843 { 00:24:47.843 "cntlid": 83, 00:24:47.843 "qid": 0, 00:24:47.843 "state": "enabled", 00:24:47.843 "thread": "nvmf_tgt_poll_group_000", 00:24:47.843 "listen_address": { 00:24:47.843 "trtype": "RDMA", 00:24:47.843 "adrfam": "IPv4", 00:24:47.843 "traddr": "192.168.100.8", 00:24:47.843 "trsvcid": "4420" 00:24:47.843 }, 00:24:47.843 "peer_address": { 00:24:47.843 "trtype": "RDMA", 00:24:47.843 "adrfam": "IPv4", 00:24:47.843 "traddr": "192.168.100.8", 00:24:47.843 "trsvcid": "49690" 00:24:47.843 }, 00:24:47.843 "auth": { 00:24:47.843 "state": "completed", 00:24:47.843 "digest": "sha384", 00:24:47.843 "dhgroup": "ffdhe6144" 00:24:47.843 } 00:24:47.843 } 00:24:47.843 ]' 00:24:47.843 07:14:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:47.843 07:14:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:47.843 07:14:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:48.102 07:14:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:48.102 07:14:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:48.102 07:14:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:48.102 07:14:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:48.102 07:14:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:24:48.102 07:14:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OTAxMGE0NzI5NDZjODU3YzQ1ZmM5ZjRkN2Q4ZmZhMTVijb+l: --dhchap-ctrl-secret DHHC-1:02:YTIxM2I2NDc1YzRiMjI1ZWY5OTI5NjU0NDVlMmM0MTQ5OTI3MTE3MWIxODA0NTgyms9fpQ==: 00:24:48.671 07:14:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:48.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:48.929 07:14:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:48.929 07:14:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.929 07:14:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:48.929 07:14:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.929 07:14:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:48.929 07:14:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:48.929 07:14:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:49.187 07:14:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:24:49.187 07:14:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:49.187 07:14:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:49.187 07:14:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:49.187 07:14:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:49.187 07:14:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:49.187 07:14:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:49.187 07:14:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.187 07:14:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:49.187 07:14:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.187 07:14:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:49.187 07:14:03 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:49.445 00:24:49.445 07:14:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:49.445 07:14:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:49.445 07:14:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:49.702 07:14:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.702 07:14:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:49.702 07:14:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.702 07:14:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:49.702 07:14:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.702 07:14:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:49.702 { 00:24:49.702 "cntlid": 85, 00:24:49.702 "qid": 0, 00:24:49.702 "state": "enabled", 00:24:49.702 "thread": "nvmf_tgt_poll_group_000", 00:24:49.702 "listen_address": { 00:24:49.702 "trtype": "RDMA", 00:24:49.702 "adrfam": "IPv4", 00:24:49.702 "traddr": "192.168.100.8", 00:24:49.702 "trsvcid": "4420" 00:24:49.702 }, 00:24:49.702 "peer_address": { 00:24:49.702 "trtype": "RDMA", 00:24:49.702 "adrfam": "IPv4", 00:24:49.702 "traddr": "192.168.100.8", 00:24:49.702 "trsvcid": "36518" 00:24:49.702 }, 00:24:49.702 "auth": { 00:24:49.702 "state": "completed", 00:24:49.702 "digest": "sha384", 00:24:49.702 "dhgroup": "ffdhe6144" 00:24:49.702 } 00:24:49.702 } 00:24:49.702 ]' 00:24:49.702 07:14:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:49.702 07:14:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:49.702 07:14:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:49.702 07:14:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:49.702 07:14:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:49.702 07:14:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:49.702 07:14:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:49.702 07:14:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:49.959 07:14:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:N2RhZGViYmNjMmRkZjczZTA0OGI2YTM1MzkzYWFmYTQ2YTYwMDVjMzgwZjAyODlkEkIl8w==: --dhchap-ctrl-secret DHHC-1:01:YmJhMzE4ODk3OWJkMDQ0Zjg5N2ZiMzZjOWYyYTJmNWN7FiE7: 00:24:50.524 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:50.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:50.782 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:50.782 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.782 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:50.782 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.782 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:50.782 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:50.782 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:50.782 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:24:50.782 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:50.782 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:50.782 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:50.782 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:50.783 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:50.783 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:24:50.783 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.783 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:50.783 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.783 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:50.783 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:51.348 00:24:51.348 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:51.348 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:51.348 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:51.348 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.348 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:51.348 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.348 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:51.348 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.348 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:51.348 { 00:24:51.348 "cntlid": 87, 00:24:51.348 "qid": 0, 00:24:51.349 "state": "enabled", 00:24:51.349 "thread": "nvmf_tgt_poll_group_000", 00:24:51.349 "listen_address": { 00:24:51.349 "trtype": "RDMA", 00:24:51.349 "adrfam": "IPv4", 00:24:51.349 "traddr": "192.168.100.8", 00:24:51.349 "trsvcid": "4420" 00:24:51.349 }, 00:24:51.349 "peer_address": { 00:24:51.349 "trtype": "RDMA", 00:24:51.349 "adrfam": "IPv4", 00:24:51.349 "traddr": "192.168.100.8", 00:24:51.349 "trsvcid": "55760" 00:24:51.349 }, 00:24:51.349 "auth": { 00:24:51.349 "state": "completed", 00:24:51.349 "digest": "sha384", 00:24:51.349 "dhgroup": "ffdhe6144" 00:24:51.349 } 00:24:51.349 } 00:24:51.349 ]' 00:24:51.349 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:51.349 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:51.349 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:51.349 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:51.349 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:51.607 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:51.607 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:51.607 07:14:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:51.607 07:14:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ZmQzMGE0MzcxYTM2OTNiODQ4MTVlYmY1N2QzOGZlMDJlMzEyYTE3MjNlYjc3MWY4OWUyYjUzMGU2MjA0ZDdlM4COFM4=: 00:24:52.172 07:14:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:52.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:52.430 07:14:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:52.430 07:14:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.430 07:14:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:52.430 07:14:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.430 07:14:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:52.430 07:14:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:52.430 07:14:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:52.430 07:14:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:52.688 07:14:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:24:52.688 07:14:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:52.688 07:14:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:52.688 07:14:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:52.688 07:14:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:52.688 07:14:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:52.688 07:14:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:52.688 07:14:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.688 07:14:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:52.688 07:14:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.688 07:14:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:52.688 07:14:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:52.946 00:24:52.946 07:14:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:24:52.946 07:14:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:52.946 07:14:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:53.204 07:14:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.204 07:14:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:53.204 07:14:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.204 07:14:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:53.204 07:14:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.205 07:14:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:53.205 { 00:24:53.205 "cntlid": 89, 00:24:53.205 "qid": 0, 00:24:53.205 "state": "enabled", 00:24:53.205 "thread": "nvmf_tgt_poll_group_000", 00:24:53.205 "listen_address": { 00:24:53.205 "trtype": "RDMA", 00:24:53.205 "adrfam": "IPv4", 00:24:53.205 "traddr": "192.168.100.8", 00:24:53.205 "trsvcid": "4420" 00:24:53.205 }, 00:24:53.205 "peer_address": { 00:24:53.205 "trtype": "RDMA", 00:24:53.205 "adrfam": "IPv4", 00:24:53.205 "traddr": "192.168.100.8", 00:24:53.205 "trsvcid": "34194" 00:24:53.205 }, 00:24:53.205 "auth": { 00:24:53.205 "state": "completed", 00:24:53.205 "digest": "sha384", 00:24:53.205 "dhgroup": "ffdhe8192" 00:24:53.205 } 00:24:53.205 } 00:24:53.205 ]' 00:24:53.205 07:14:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:53.205 07:14:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:53.205 07:14:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:53.205 07:14:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:53.205 07:14:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:53.463 07:14:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:53.463 07:14:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:53.463 07:14:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:53.463 07:14:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDg1ZDk5MjU3NjY3OTQ2ODc0ZDM0YWM1NGI0ZjdkM2U4NjRmNjRmZDQwZTY1YWMyAtInbA==: --dhchap-ctrl-secret DHHC-1:03:NjlkYTIzYjg2ZGFlY2MzYmUyMzBiYTQ2ZjcwMDAzZmM0MGQ3OGQxMTYwOWMzMWNjOTU5OTE5ZWFkYTJmN2M1MF7Oh18=: 00:24:54.030 07:14:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:54.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
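For reference, one pass of the connect_authenticate loop exercised in the trace above can be condensed to the sketch below. It only restates commands already visible in this log; the rpc.py path, NQNs, host UUID and 192.168.100.8 listener are copied from this run, while the key0/ckey0 names, the DHHC-1 placeholder secrets, and the assumption that target-side rpc_cmd calls use the default RPC socket are illustrative, not taken from the test scripts themselves.

    #!/usr/bin/env bash
    # Illustrative sketch only: condenses one sha384/ffdhe8192 connect_authenticate
    # pass from the trace above. Secrets are placeholders, not the real key values.
    set -e

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

    hostrpc() { "$rpc_py" -s /var/tmp/host.sock "$@"; }   # host-side SPDK app (socket as in the trace)
    rpc_cmd() { "$rpc_py" "$@"; }                          # target side (assumes the default RPC socket)

    # Limit the initiator to one digest/dhgroup combination for this pass.
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

    # Allow the host on the subsystem with key0 and the optional controller key.
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Attach from the SPDK host, then confirm the negotiated auth parameters.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
            -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'   # expect sha384 / ffdhe8192 / completed
    hostrpc bdev_nvme_detach_controller nvme0

    # Repeat the connection with the kernel initiator, then tear the host mapping down.
    nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" \
         --hostid 8013ee90-59d8-e711-906e-00163566263e \
         --dhchap-secret 'DHHC-1:00:<key0>' --dhchap-ctrl-secret 'DHHC-1:03:<ckey0>'
    nvme disconnect -n "$subnqn"
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
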
00:24:54.287 07:14:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:54.287 07:14:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.287 07:14:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:54.287 07:14:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.287 07:14:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:54.287 07:14:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:54.287 07:14:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:54.546 07:14:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:24:54.546 07:14:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:54.546 07:14:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:54.546 07:14:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:54.546 07:14:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:54.546 07:14:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:54.546 07:14:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:54.546 07:14:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.546 07:14:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:54.546 07:14:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.546 07:14:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:54.546 07:14:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:54.805 00:24:54.805 07:14:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:54.805 07:14:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:54.805 07:14:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:55.064 07:14:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.064 07:14:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:55.064 07:14:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.064 07:14:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:55.064 07:14:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.064 07:14:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:55.064 { 00:24:55.064 "cntlid": 91, 00:24:55.064 "qid": 0, 00:24:55.064 "state": "enabled", 00:24:55.064 "thread": "nvmf_tgt_poll_group_000", 00:24:55.064 "listen_address": { 00:24:55.064 "trtype": "RDMA", 00:24:55.064 "adrfam": "IPv4", 00:24:55.064 "traddr": "192.168.100.8", 00:24:55.064 "trsvcid": "4420" 00:24:55.064 }, 00:24:55.064 "peer_address": { 00:24:55.064 "trtype": "RDMA", 00:24:55.064 "adrfam": "IPv4", 00:24:55.064 "traddr": "192.168.100.8", 00:24:55.064 "trsvcid": "54149" 00:24:55.064 }, 00:24:55.064 "auth": { 00:24:55.064 "state": "completed", 00:24:55.064 "digest": "sha384", 00:24:55.064 "dhgroup": "ffdhe8192" 00:24:55.064 } 00:24:55.064 } 00:24:55.064 ]' 00:24:55.064 07:14:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:55.064 07:14:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:55.064 07:14:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:55.064 07:14:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:55.064 07:14:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:55.323 07:14:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:55.323 07:14:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:55.323 07:14:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:55.323 07:14:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OTAxMGE0NzI5NDZjODU3YzQ1ZmM5ZjRkN2Q4ZmZhMTVijb+l: --dhchap-ctrl-secret DHHC-1:02:YTIxM2I2NDc1YzRiMjI1ZWY5OTI5NjU0NDVlMmM0MTQ5OTI3MTE3MWIxODA0NTgyms9fpQ==: 00:24:55.890 07:14:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:56.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:56.148 07:14:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:56.148 07:14:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.148 07:14:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:56.148 07:14:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.149 07:14:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:56.149 07:14:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:56.149 07:14:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:56.408 07:14:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:24:56.408 07:14:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:56.408 07:14:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:56.408 07:14:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:56.408 07:14:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:56.408 07:14:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:56.408 07:14:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:56.408 07:14:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.408 07:14:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:56.408 07:14:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.408 07:14:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:56.408 07:14:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:56.667 00:24:56.926 07:14:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:56.926 07:14:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:56.926 07:14:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:56.926 07:14:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.926 07:14:11 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:56.926 07:14:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.926 07:14:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:56.926 07:14:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.926 07:14:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:56.926 { 00:24:56.926 "cntlid": 93, 00:24:56.926 "qid": 0, 00:24:56.926 "state": "enabled", 00:24:56.926 "thread": "nvmf_tgt_poll_group_000", 00:24:56.926 "listen_address": { 00:24:56.926 "trtype": "RDMA", 00:24:56.926 "adrfam": "IPv4", 00:24:56.926 "traddr": "192.168.100.8", 00:24:56.926 "trsvcid": "4420" 00:24:56.926 }, 00:24:56.926 "peer_address": { 00:24:56.926 "trtype": "RDMA", 00:24:56.926 "adrfam": "IPv4", 00:24:56.926 "traddr": "192.168.100.8", 00:24:56.926 "trsvcid": "49490" 00:24:56.926 }, 00:24:56.926 "auth": { 00:24:56.926 "state": "completed", 00:24:56.926 "digest": "sha384", 00:24:56.926 "dhgroup": "ffdhe8192" 00:24:56.926 } 00:24:56.926 } 00:24:56.926 ]' 00:24:56.926 07:14:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:56.926 07:14:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:56.926 07:14:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:57.185 07:14:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:57.185 07:14:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:57.185 07:14:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:57.185 07:14:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:57.185 07:14:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:57.444 07:14:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:N2RhZGViYmNjMmRkZjczZTA0OGI2YTM1MzkzYWFmYTQ2YTYwMDVjMzgwZjAyODlkEkIl8w==: --dhchap-ctrl-secret DHHC-1:01:YmJhMzE4ODk3OWJkMDQ0Zjg5N2ZiMzZjOWYyYTJmNWN7FiE7: 00:24:58.013 07:14:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:58.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:58.013 07:14:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:58.013 07:14:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.013 07:14:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:58.013 07:14:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.013 07:14:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:58.013 07:14:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:58.013 07:14:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:58.272 07:14:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:24:58.272 07:14:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:58.272 07:14:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:58.272 07:14:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:58.272 07:14:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:58.272 07:14:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:58.272 07:14:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:24:58.272 07:14:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.272 07:14:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:58.272 07:14:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.272 07:14:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:58.272 07:14:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:58.531 00:24:58.790 07:14:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:58.790 07:14:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:58.790 07:14:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:58.790 07:14:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.790 07:14:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:58.790 07:14:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.790 07:14:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:58.790 
07:14:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.790 07:14:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:58.790 { 00:24:58.790 "cntlid": 95, 00:24:58.790 "qid": 0, 00:24:58.790 "state": "enabled", 00:24:58.790 "thread": "nvmf_tgt_poll_group_000", 00:24:58.790 "listen_address": { 00:24:58.790 "trtype": "RDMA", 00:24:58.790 "adrfam": "IPv4", 00:24:58.790 "traddr": "192.168.100.8", 00:24:58.790 "trsvcid": "4420" 00:24:58.790 }, 00:24:58.790 "peer_address": { 00:24:58.790 "trtype": "RDMA", 00:24:58.790 "adrfam": "IPv4", 00:24:58.790 "traddr": "192.168.100.8", 00:24:58.790 "trsvcid": "43259" 00:24:58.790 }, 00:24:58.790 "auth": { 00:24:58.790 "state": "completed", 00:24:58.790 "digest": "sha384", 00:24:58.790 "dhgroup": "ffdhe8192" 00:24:58.790 } 00:24:58.790 } 00:24:58.790 ]' 00:24:58.790 07:14:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:58.790 07:14:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:58.790 07:14:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:59.056 07:14:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:59.056 07:14:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:59.056 07:14:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:59.056 07:14:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:59.056 07:14:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:59.056 07:14:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ZmQzMGE0MzcxYTM2OTNiODQ4MTVlYmY1N2QzOGZlMDJlMzEyYTE3MjNlYjc3MWY4OWUyYjUzMGU2MjA0ZDdlM4COFM4=: 00:24:59.635 07:14:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:59.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:59.894 07:14:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:59.894 07:14:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.894 07:14:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:59.894 07:14:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.894 07:14:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:24:59.894 07:14:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:59.894 07:14:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:59.894 07:14:14 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:59.894 07:14:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:59.894 07:14:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:24:59.894 07:14:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:59.894 07:14:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:59.894 07:14:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:59.894 07:14:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:00.154 07:14:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:00.154 07:14:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:00.154 07:14:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.154 07:14:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:00.154 07:14:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.154 07:14:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:00.154 07:14:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:00.154 00:25:00.154 07:14:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:00.154 07:14:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:00.154 07:14:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:00.413 07:14:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.413 07:14:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:00.413 07:14:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.413 07:14:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:00.413 07:14:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.413 07:14:14 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:00.413 { 00:25:00.413 "cntlid": 97, 00:25:00.413 "qid": 0, 00:25:00.413 "state": "enabled", 00:25:00.413 "thread": "nvmf_tgt_poll_group_000", 00:25:00.413 "listen_address": { 00:25:00.413 "trtype": "RDMA", 00:25:00.413 "adrfam": "IPv4", 00:25:00.413 "traddr": "192.168.100.8", 00:25:00.413 "trsvcid": "4420" 00:25:00.413 }, 00:25:00.413 "peer_address": { 00:25:00.413 "trtype": "RDMA", 00:25:00.413 "adrfam": "IPv4", 00:25:00.413 "traddr": "192.168.100.8", 00:25:00.413 "trsvcid": "60408" 00:25:00.413 }, 00:25:00.413 "auth": { 00:25:00.413 "state": "completed", 00:25:00.413 "digest": "sha512", 00:25:00.413 "dhgroup": "null" 00:25:00.413 } 00:25:00.413 } 00:25:00.413 ]' 00:25:00.413 07:14:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:00.413 07:14:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:00.413 07:14:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:00.672 07:14:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:25:00.672 07:14:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:00.672 07:14:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:00.672 07:14:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:00.672 07:14:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:00.672 07:14:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDg1ZDk5MjU3NjY3OTQ2ODc0ZDM0YWM1NGI0ZjdkM2U4NjRmNjRmZDQwZTY1YWMyAtInbA==: --dhchap-ctrl-secret DHHC-1:03:NjlkYTIzYjg2ZGFlY2MzYmUyMzBiYTQ2ZjcwMDAzZmM0MGQ3OGQxMTYwOWMzMWNjOTU5OTE5ZWFkYTJmN2M1MF7Oh18=: 00:25:01.610 07:14:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:01.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:01.610 07:14:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:01.610 07:14:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.610 07:14:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:01.610 07:14:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.610 07:14:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:01.610 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:01.610 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:01.610 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:25:01.610 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:01.610 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:01.610 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:25:01.610 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:01.610 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:01.610 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:01.610 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.610 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:01.610 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.610 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:01.610 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:01.869 00:25:01.869 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:01.869 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:01.869 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:02.129 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.129 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:02.129 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.129 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:02.129 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.129 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:02.129 { 00:25:02.129 "cntlid": 99, 00:25:02.129 "qid": 0, 00:25:02.129 "state": "enabled", 00:25:02.129 "thread": "nvmf_tgt_poll_group_000", 00:25:02.129 
"listen_address": { 00:25:02.129 "trtype": "RDMA", 00:25:02.129 "adrfam": "IPv4", 00:25:02.129 "traddr": "192.168.100.8", 00:25:02.129 "trsvcid": "4420" 00:25:02.129 }, 00:25:02.129 "peer_address": { 00:25:02.129 "trtype": "RDMA", 00:25:02.129 "adrfam": "IPv4", 00:25:02.129 "traddr": "192.168.100.8", 00:25:02.129 "trsvcid": "35336" 00:25:02.129 }, 00:25:02.129 "auth": { 00:25:02.129 "state": "completed", 00:25:02.129 "digest": "sha512", 00:25:02.129 "dhgroup": "null" 00:25:02.129 } 00:25:02.129 } 00:25:02.129 ]' 00:25:02.129 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:02.129 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:02.129 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:02.129 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:25:02.129 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:02.129 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:02.129 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:02.129 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:02.388 07:14:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OTAxMGE0NzI5NDZjODU3YzQ1ZmM5ZjRkN2Q4ZmZhMTVijb+l: --dhchap-ctrl-secret DHHC-1:02:YTIxM2I2NDc1YzRiMjI1ZWY5OTI5NjU0NDVlMmM0MTQ5OTI3MTE3MWIxODA0NTgyms9fpQ==: 00:25:02.955 07:14:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:03.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:03.214 07:14:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:03.214 07:14:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.214 07:14:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:03.214 07:14:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.214 07:14:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:03.214 07:14:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:03.214 07:14:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:03.214 07:14:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:25:03.214 07:14:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:03.214 07:14:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:03.214 07:14:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:25:03.214 07:14:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:03.214 07:14:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:03.214 07:14:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:03.214 07:14:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.214 07:14:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:03.214 07:14:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.214 07:14:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:03.214 07:14:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:03.473 00:25:03.473 07:14:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:03.473 07:14:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:03.473 07:14:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:03.732 07:14:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.732 07:14:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:03.732 07:14:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.732 07:14:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:03.733 07:14:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.733 07:14:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:03.733 { 00:25:03.733 "cntlid": 101, 00:25:03.733 "qid": 0, 00:25:03.733 "state": "enabled", 00:25:03.733 "thread": "nvmf_tgt_poll_group_000", 00:25:03.733 "listen_address": { 00:25:03.733 "trtype": "RDMA", 00:25:03.733 "adrfam": "IPv4", 00:25:03.733 "traddr": "192.168.100.8", 00:25:03.733 "trsvcid": "4420" 00:25:03.733 }, 00:25:03.733 "peer_address": { 00:25:03.733 "trtype": "RDMA", 00:25:03.733 "adrfam": "IPv4", 00:25:03.733 "traddr": "192.168.100.8", 00:25:03.733 
"trsvcid": "36801" 00:25:03.733 }, 00:25:03.733 "auth": { 00:25:03.733 "state": "completed", 00:25:03.733 "digest": "sha512", 00:25:03.733 "dhgroup": "null" 00:25:03.733 } 00:25:03.733 } 00:25:03.733 ]' 00:25:03.733 07:14:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:03.733 07:14:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:03.733 07:14:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:03.733 07:14:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:25:03.733 07:14:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:03.992 07:14:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:03.992 07:14:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:03.992 07:14:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:03.992 07:14:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:N2RhZGViYmNjMmRkZjczZTA0OGI2YTM1MzkzYWFmYTQ2YTYwMDVjMzgwZjAyODlkEkIl8w==: --dhchap-ctrl-secret DHHC-1:01:YmJhMzE4ODk3OWJkMDQ0Zjg5N2ZiMzZjOWYyYTJmNWN7FiE7: 00:25:04.559 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:04.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:04.818 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:04.818 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.818 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:04.818 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.818 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:04.818 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:04.818 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:05.077 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:25:05.077 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:05.077 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:05.077 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:25:05.077 07:14:19 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:05.077 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:05.077 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:25:05.077 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.077 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:05.077 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.077 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:05.078 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:05.078 00:25:05.337 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:05.337 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:05.337 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:05.337 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.337 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:05.337 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.337 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:05.337 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.337 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:05.337 { 00:25:05.337 "cntlid": 103, 00:25:05.337 "qid": 0, 00:25:05.337 "state": "enabled", 00:25:05.337 "thread": "nvmf_tgt_poll_group_000", 00:25:05.337 "listen_address": { 00:25:05.337 "trtype": "RDMA", 00:25:05.337 "adrfam": "IPv4", 00:25:05.337 "traddr": "192.168.100.8", 00:25:05.337 "trsvcid": "4420" 00:25:05.337 }, 00:25:05.337 "peer_address": { 00:25:05.337 "trtype": "RDMA", 00:25:05.337 "adrfam": "IPv4", 00:25:05.337 "traddr": "192.168.100.8", 00:25:05.337 "trsvcid": "52180" 00:25:05.337 }, 00:25:05.337 "auth": { 00:25:05.337 "state": "completed", 00:25:05.337 "digest": "sha512", 00:25:05.337 "dhgroup": "null" 00:25:05.337 } 00:25:05.337 } 00:25:05.337 ]' 00:25:05.337 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:05.337 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha512 == \s\h\a\5\1\2 ]] 00:25:05.337 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:05.596 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:25:05.596 07:14:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:05.596 07:14:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:05.596 07:14:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:05.596 07:14:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:05.855 07:14:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ZmQzMGE0MzcxYTM2OTNiODQ4MTVlYmY1N2QzOGZlMDJlMzEyYTE3MjNlYjc3MWY4OWUyYjUzMGU2MjA0ZDdlM4COFM4=: 00:25:06.423 07:14:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:06.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:06.423 07:14:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:06.423 07:14:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.423 07:14:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:06.423 07:14:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.423 07:14:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:25:06.423 07:14:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:06.423 07:14:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:06.423 07:14:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:06.682 07:14:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:25:06.683 07:14:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:06.683 07:14:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:06.683 07:14:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:06.683 07:14:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:06.683 07:14:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:06.683 07:14:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:06.683 07:14:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.683 07:14:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:06.683 07:14:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.683 07:14:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:06.683 07:14:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:06.941 00:25:06.941 07:14:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:06.941 07:14:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:06.941 07:14:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:06.941 07:14:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.941 07:14:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:06.941 07:14:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.941 07:14:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:07.200 07:14:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.200 07:14:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:07.200 { 00:25:07.200 "cntlid": 105, 00:25:07.200 "qid": 0, 00:25:07.201 "state": "enabled", 00:25:07.201 "thread": "nvmf_tgt_poll_group_000", 00:25:07.201 "listen_address": { 00:25:07.201 "trtype": "RDMA", 00:25:07.201 "adrfam": "IPv4", 00:25:07.201 "traddr": "192.168.100.8", 00:25:07.201 "trsvcid": "4420" 00:25:07.201 }, 00:25:07.201 "peer_address": { 00:25:07.201 "trtype": "RDMA", 00:25:07.201 "adrfam": "IPv4", 00:25:07.201 "traddr": "192.168.100.8", 00:25:07.201 "trsvcid": "47944" 00:25:07.201 }, 00:25:07.201 "auth": { 00:25:07.201 "state": "completed", 00:25:07.201 "digest": "sha512", 00:25:07.201 "dhgroup": "ffdhe2048" 00:25:07.201 } 00:25:07.201 } 00:25:07.201 ]' 00:25:07.201 07:14:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:07.201 07:14:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:07.201 07:14:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:07.201 07:14:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:07.201 07:14:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:07.201 07:14:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:07.201 07:14:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:07.201 07:14:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:07.460 07:14:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDg1ZDk5MjU3NjY3OTQ2ODc0ZDM0YWM1NGI0ZjdkM2U4NjRmNjRmZDQwZTY1YWMyAtInbA==: --dhchap-ctrl-secret DHHC-1:03:NjlkYTIzYjg2ZGFlY2MzYmUyMzBiYTQ2ZjcwMDAzZmM0MGQ3OGQxMTYwOWMzMWNjOTU5OTE5ZWFkYTJmN2M1MF7Oh18=: 00:25:08.028 07:14:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:08.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:08.028 07:14:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:08.028 07:14:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.028 07:14:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:08.028 07:14:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.028 07:14:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:08.028 07:14:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:08.028 07:14:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:08.287 07:14:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:25:08.287 07:14:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:08.287 07:14:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:08.287 07:14:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:08.287 07:14:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:08.287 07:14:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:08.287 07:14:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:08.287 07:14:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.287 07:14:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:08.287 07:14:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.287 07:14:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:08.287 07:14:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:08.547 00:25:08.547 07:14:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:08.547 07:14:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:08.547 07:14:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:08.806 07:14:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.806 07:14:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:08.806 07:14:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.806 07:14:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:08.806 07:14:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.806 07:14:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:08.806 { 00:25:08.806 "cntlid": 107, 00:25:08.806 "qid": 0, 00:25:08.806 "state": "enabled", 00:25:08.806 "thread": "nvmf_tgt_poll_group_000", 00:25:08.806 "listen_address": { 00:25:08.806 "trtype": "RDMA", 00:25:08.806 "adrfam": "IPv4", 00:25:08.806 "traddr": "192.168.100.8", 00:25:08.806 "trsvcid": "4420" 00:25:08.806 }, 00:25:08.806 "peer_address": { 00:25:08.806 "trtype": "RDMA", 00:25:08.806 "adrfam": "IPv4", 00:25:08.806 "traddr": "192.168.100.8", 00:25:08.806 "trsvcid": "41556" 00:25:08.806 }, 00:25:08.806 "auth": { 00:25:08.806 "state": "completed", 00:25:08.806 "digest": "sha512", 00:25:08.806 "dhgroup": "ffdhe2048" 00:25:08.806 } 00:25:08.806 } 00:25:08.806 ]' 00:25:08.806 07:14:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:08.806 07:14:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:08.806 07:14:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:08.806 07:14:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:08.806 07:14:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:08.806 07:14:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:08.806 07:14:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:08.806 07:14:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:09.066 07:14:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OTAxMGE0NzI5NDZjODU3YzQ1ZmM5ZjRkN2Q4ZmZhMTVijb+l: --dhchap-ctrl-secret DHHC-1:02:YTIxM2I2NDc1YzRiMjI1ZWY5OTI5NjU0NDVlMmM0MTQ5OTI3MTE3MWIxODA0NTgyms9fpQ==: 00:25:09.632 07:14:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:09.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:09.890 07:14:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:09.890 07:14:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.890 07:14:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:09.890 07:14:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.890 07:14:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:09.890 07:14:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:09.890 07:14:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:09.890 07:14:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:25:09.890 07:14:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:09.890 07:14:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:09.890 07:14:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:09.890 07:14:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:09.890 07:14:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:09.890 07:14:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:10.148 07:14:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.148 07:14:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:10.148 07:14:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.148 
07:14:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:10.148 07:14:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:10.148 00:25:10.407 07:14:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:10.407 07:14:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:10.407 07:14:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:10.407 07:14:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.407 07:14:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:10.407 07:14:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.407 07:14:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:10.407 07:14:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.407 07:14:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:10.407 { 00:25:10.407 "cntlid": 109, 00:25:10.407 "qid": 0, 00:25:10.407 "state": "enabled", 00:25:10.407 "thread": "nvmf_tgt_poll_group_000", 00:25:10.407 "listen_address": { 00:25:10.407 "trtype": "RDMA", 00:25:10.407 "adrfam": "IPv4", 00:25:10.407 "traddr": "192.168.100.8", 00:25:10.407 "trsvcid": "4420" 00:25:10.407 }, 00:25:10.407 "peer_address": { 00:25:10.407 "trtype": "RDMA", 00:25:10.407 "adrfam": "IPv4", 00:25:10.407 "traddr": "192.168.100.8", 00:25:10.407 "trsvcid": "40554" 00:25:10.407 }, 00:25:10.407 "auth": { 00:25:10.407 "state": "completed", 00:25:10.407 "digest": "sha512", 00:25:10.407 "dhgroup": "ffdhe2048" 00:25:10.407 } 00:25:10.407 } 00:25:10.407 ]' 00:25:10.407 07:14:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:10.407 07:14:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:10.407 07:14:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:10.665 07:14:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:10.665 07:14:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:10.665 07:14:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:10.665 07:14:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:10.665 07:14:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:10.923 07:14:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:N2RhZGViYmNjMmRkZjczZTA0OGI2YTM1MzkzYWFmYTQ2YTYwMDVjMzgwZjAyODlkEkIl8w==: --dhchap-ctrl-secret DHHC-1:01:YmJhMzE4ODk3OWJkMDQ0Zjg5N2ZiMzZjOWYyYTJmNWN7FiE7: 00:25:11.490 07:14:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:11.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:11.490 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:11.490 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.490 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:11.490 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.490 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:11.490 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:11.490 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:11.748 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:25:11.748 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:11.748 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:11.748 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:11.748 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:11.748 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:11.748 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:25:11.748 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.748 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:11.748 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.748 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:11.748 07:14:26 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:12.009 00:25:12.009 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:12.009 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:12.009 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:12.009 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.009 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:12.009 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.009 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:12.316 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.316 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:12.316 { 00:25:12.316 "cntlid": 111, 00:25:12.316 "qid": 0, 00:25:12.316 "state": "enabled", 00:25:12.316 "thread": "nvmf_tgt_poll_group_000", 00:25:12.316 "listen_address": { 00:25:12.316 "trtype": "RDMA", 00:25:12.316 "adrfam": "IPv4", 00:25:12.316 "traddr": "192.168.100.8", 00:25:12.316 "trsvcid": "4420" 00:25:12.316 }, 00:25:12.316 "peer_address": { 00:25:12.316 "trtype": "RDMA", 00:25:12.316 "adrfam": "IPv4", 00:25:12.316 "traddr": "192.168.100.8", 00:25:12.316 "trsvcid": "58513" 00:25:12.316 }, 00:25:12.316 "auth": { 00:25:12.316 "state": "completed", 00:25:12.316 "digest": "sha512", 00:25:12.316 "dhgroup": "ffdhe2048" 00:25:12.316 } 00:25:12.316 } 00:25:12.316 ]' 00:25:12.316 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:12.316 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:12.316 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:12.316 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:12.316 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:12.316 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:12.316 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:12.316 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:12.575 07:14:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 
8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ZmQzMGE0MzcxYTM2OTNiODQ4MTVlYmY1N2QzOGZlMDJlMzEyYTE3MjNlYjc3MWY4OWUyYjUzMGU2MjA0ZDdlM4COFM4=: 00:25:13.142 07:14:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:13.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:13.142 07:14:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:13.142 07:14:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.142 07:14:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:13.142 07:14:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.142 07:14:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:25:13.142 07:14:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:13.142 07:14:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:13.142 07:14:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:13.401 07:14:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:25:13.401 07:14:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:13.401 07:14:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:13.401 07:14:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:25:13.401 07:14:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:13.401 07:14:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:13.401 07:14:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:13.401 07:14:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.401 07:14:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:13.401 07:14:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.401 07:14:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:13.401 07:14:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:13.659 00:25:13.659 07:14:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:13.659 07:14:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:13.659 07:14:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:13.917 07:14:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.917 07:14:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:13.917 07:14:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.917 07:14:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:13.917 07:14:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.917 07:14:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:13.917 { 00:25:13.917 "cntlid": 113, 00:25:13.917 "qid": 0, 00:25:13.917 "state": "enabled", 00:25:13.917 "thread": "nvmf_tgt_poll_group_000", 00:25:13.917 "listen_address": { 00:25:13.917 "trtype": "RDMA", 00:25:13.917 "adrfam": "IPv4", 00:25:13.917 "traddr": "192.168.100.8", 00:25:13.917 "trsvcid": "4420" 00:25:13.917 }, 00:25:13.917 "peer_address": { 00:25:13.917 "trtype": "RDMA", 00:25:13.917 "adrfam": "IPv4", 00:25:13.917 "traddr": "192.168.100.8", 00:25:13.917 "trsvcid": "48974" 00:25:13.917 }, 00:25:13.917 "auth": { 00:25:13.917 "state": "completed", 00:25:13.917 "digest": "sha512", 00:25:13.917 "dhgroup": "ffdhe3072" 00:25:13.917 } 00:25:13.917 } 00:25:13.917 ]' 00:25:13.917 07:14:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:13.917 07:14:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:13.917 07:14:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:13.917 07:14:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:13.917 07:14:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:13.917 07:14:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:13.917 07:14:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:13.917 07:14:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:14.175 07:14:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDg1ZDk5MjU3NjY3OTQ2ODc0ZDM0YWM1NGI0ZjdkM2U4NjRmNjRmZDQwZTY1YWMyAtInbA==: --dhchap-ctrl-secret 
DHHC-1:03:NjlkYTIzYjg2ZGFlY2MzYmUyMzBiYTQ2ZjcwMDAzZmM0MGQ3OGQxMTYwOWMzMWNjOTU5OTE5ZWFkYTJmN2M1MF7Oh18=: 00:25:14.743 07:14:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:15.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:15.002 07:14:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:15.002 07:14:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.002 07:14:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:15.002 07:14:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.002 07:14:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:15.002 07:14:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:15.002 07:14:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:15.002 07:14:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:25:15.002 07:14:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:15.002 07:14:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:15.002 07:14:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:25:15.002 07:14:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:15.002 07:14:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:15.002 07:14:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:15.002 07:14:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.002 07:14:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:15.002 07:14:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.002 07:14:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:15.002 07:14:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:15.261 00:25:15.261 07:14:29 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:15.261 07:14:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:15.261 07:14:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:15.520 07:14:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.520 07:14:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:15.520 07:14:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.520 07:14:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:15.520 07:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.520 07:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:15.520 { 00:25:15.520 "cntlid": 115, 00:25:15.520 "qid": 0, 00:25:15.520 "state": "enabled", 00:25:15.520 "thread": "nvmf_tgt_poll_group_000", 00:25:15.520 "listen_address": { 00:25:15.520 "trtype": "RDMA", 00:25:15.520 "adrfam": "IPv4", 00:25:15.520 "traddr": "192.168.100.8", 00:25:15.520 "trsvcid": "4420" 00:25:15.520 }, 00:25:15.520 "peer_address": { 00:25:15.520 "trtype": "RDMA", 00:25:15.520 "adrfam": "IPv4", 00:25:15.520 "traddr": "192.168.100.8", 00:25:15.520 "trsvcid": "41361" 00:25:15.520 }, 00:25:15.520 "auth": { 00:25:15.520 "state": "completed", 00:25:15.520 "digest": "sha512", 00:25:15.520 "dhgroup": "ffdhe3072" 00:25:15.520 } 00:25:15.520 } 00:25:15.520 ]' 00:25:15.520 07:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:15.520 07:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:15.520 07:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:15.520 07:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:15.520 07:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:15.520 07:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:15.520 07:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:15.520 07:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:15.779 07:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OTAxMGE0NzI5NDZjODU3YzQ1ZmM5ZjRkN2Q4ZmZhMTVijb+l: --dhchap-ctrl-secret DHHC-1:02:YTIxM2I2NDc1YzRiMjI1ZWY5OTI5NjU0NDVlMmM0MTQ5OTI3MTE3MWIxODA0NTgyms9fpQ==: 00:25:16.347 07:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:16.606 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:16.606 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:16.606 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.606 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:16.606 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.606 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:16.606 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:16.606 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:16.865 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:25:16.865 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:16.865 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:16.865 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:25:16.865 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:16.865 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:16.865 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:16.865 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.865 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:16.865 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.865 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:16.865 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:16.865 00:25:17.124 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:17.124 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:17.124 07:14:31 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:17.124 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.124 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:17.124 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.124 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:17.125 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.125 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:17.125 { 00:25:17.125 "cntlid": 117, 00:25:17.125 "qid": 0, 00:25:17.125 "state": "enabled", 00:25:17.125 "thread": "nvmf_tgt_poll_group_000", 00:25:17.125 "listen_address": { 00:25:17.125 "trtype": "RDMA", 00:25:17.125 "adrfam": "IPv4", 00:25:17.125 "traddr": "192.168.100.8", 00:25:17.125 "trsvcid": "4420" 00:25:17.125 }, 00:25:17.125 "peer_address": { 00:25:17.125 "trtype": "RDMA", 00:25:17.125 "adrfam": "IPv4", 00:25:17.125 "traddr": "192.168.100.8", 00:25:17.125 "trsvcid": "39393" 00:25:17.125 }, 00:25:17.125 "auth": { 00:25:17.125 "state": "completed", 00:25:17.125 "digest": "sha512", 00:25:17.125 "dhgroup": "ffdhe3072" 00:25:17.125 } 00:25:17.125 } 00:25:17.125 ]' 00:25:17.125 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:17.125 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:17.125 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:17.383 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:17.383 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:17.383 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:17.384 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:17.384 07:14:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:17.642 07:14:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:N2RhZGViYmNjMmRkZjczZTA0OGI2YTM1MzkzYWFmYTQ2YTYwMDVjMzgwZjAyODlkEkIl8w==: --dhchap-ctrl-secret DHHC-1:01:YmJhMzE4ODk3OWJkMDQ0Zjg5N2ZiMzZjOWYyYTJmNWN7FiE7: 00:25:18.211 07:14:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:18.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:18.211 07:14:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:18.211 07:14:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.211 07:14:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:18.211 07:14:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.211 07:14:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:18.211 07:14:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:18.211 07:14:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:18.470 07:14:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:25:18.470 07:14:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:18.470 07:14:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:18.470 07:14:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:25:18.470 07:14:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:18.470 07:14:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:18.470 07:14:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:25:18.470 07:14:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.470 07:14:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:18.470 07:14:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.470 07:14:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:18.470 07:14:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:18.729 00:25:18.729 07:14:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:18.729 07:14:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:18.729 07:14:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:18.988 07:14:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.988 07:14:33 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:18.988 07:14:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.988 07:14:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:18.988 07:14:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.988 07:14:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:18.988 { 00:25:18.988 "cntlid": 119, 00:25:18.988 "qid": 0, 00:25:18.988 "state": "enabled", 00:25:18.988 "thread": "nvmf_tgt_poll_group_000", 00:25:18.988 "listen_address": { 00:25:18.988 "trtype": "RDMA", 00:25:18.988 "adrfam": "IPv4", 00:25:18.988 "traddr": "192.168.100.8", 00:25:18.988 "trsvcid": "4420" 00:25:18.988 }, 00:25:18.988 "peer_address": { 00:25:18.988 "trtype": "RDMA", 00:25:18.988 "adrfam": "IPv4", 00:25:18.988 "traddr": "192.168.100.8", 00:25:18.988 "trsvcid": "48737" 00:25:18.988 }, 00:25:18.988 "auth": { 00:25:18.988 "state": "completed", 00:25:18.988 "digest": "sha512", 00:25:18.988 "dhgroup": "ffdhe3072" 00:25:18.988 } 00:25:18.988 } 00:25:18.988 ]' 00:25:18.988 07:14:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:18.988 07:14:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:18.988 07:14:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:18.988 07:14:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:18.988 07:14:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:18.988 07:14:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:18.988 07:14:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:18.988 07:14:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:19.245 07:14:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ZmQzMGE0MzcxYTM2OTNiODQ4MTVlYmY1N2QzOGZlMDJlMzEyYTE3MjNlYjc3MWY4OWUyYjUzMGU2MjA0ZDdlM4COFM4=: 00:25:19.812 07:14:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:19.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:19.812 07:14:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:19.812 07:14:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.812 07:14:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:20.071 07:14:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.071 
07:14:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:25:20.071 07:14:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:20.071 07:14:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:20.071 07:14:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:20.071 07:14:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:25:20.071 07:14:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:20.071 07:14:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:20.071 07:14:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:25:20.071 07:14:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:20.071 07:14:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:20.071 07:14:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:20.071 07:14:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.071 07:14:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:20.071 07:14:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.071 07:14:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:20.071 07:14:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:20.330 00:25:20.330 07:14:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:20.330 07:14:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:20.330 07:14:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:20.589 07:14:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.590 07:14:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:20.590 07:14:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:20.590 07:14:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:20.590 07:14:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.590 07:14:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:20.590 { 00:25:20.590 "cntlid": 121, 00:25:20.590 "qid": 0, 00:25:20.590 "state": "enabled", 00:25:20.590 "thread": "nvmf_tgt_poll_group_000", 00:25:20.590 "listen_address": { 00:25:20.590 "trtype": "RDMA", 00:25:20.590 "adrfam": "IPv4", 00:25:20.590 "traddr": "192.168.100.8", 00:25:20.590 "trsvcid": "4420" 00:25:20.590 }, 00:25:20.590 "peer_address": { 00:25:20.590 "trtype": "RDMA", 00:25:20.590 "adrfam": "IPv4", 00:25:20.590 "traddr": "192.168.100.8", 00:25:20.590 "trsvcid": "37404" 00:25:20.590 }, 00:25:20.590 "auth": { 00:25:20.590 "state": "completed", 00:25:20.590 "digest": "sha512", 00:25:20.590 "dhgroup": "ffdhe4096" 00:25:20.590 } 00:25:20.590 } 00:25:20.590 ]' 00:25:20.590 07:14:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:20.590 07:14:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:20.590 07:14:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:20.590 07:14:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:20.590 07:14:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:20.848 07:14:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:20.848 07:14:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:20.848 07:14:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:20.848 07:14:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDg1ZDk5MjU3NjY3OTQ2ODc0ZDM0YWM1NGI0ZjdkM2U4NjRmNjRmZDQwZTY1YWMyAtInbA==: --dhchap-ctrl-secret DHHC-1:03:NjlkYTIzYjg2ZGFlY2MzYmUyMzBiYTQ2ZjcwMDAzZmM0MGQ3OGQxMTYwOWMzMWNjOTU5OTE5ZWFkYTJmN2M1MF7Oh18=: 00:25:21.416 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:21.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:21.675 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:21.675 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.675 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:21.675 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.675 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:21.675 07:14:36 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:21.675 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:21.934 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:25:21.934 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:21.934 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:21.934 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:25:21.934 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:21.934 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:21.934 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:21.934 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.934 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:21.934 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.934 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:21.934 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:22.194 00:25:22.194 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:22.194 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:22.194 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:22.194 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.194 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:22.194 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.194 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:22.453 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.453 
07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:22.453 { 00:25:22.453 "cntlid": 123, 00:25:22.453 "qid": 0, 00:25:22.453 "state": "enabled", 00:25:22.453 "thread": "nvmf_tgt_poll_group_000", 00:25:22.453 "listen_address": { 00:25:22.453 "trtype": "RDMA", 00:25:22.453 "adrfam": "IPv4", 00:25:22.453 "traddr": "192.168.100.8", 00:25:22.453 "trsvcid": "4420" 00:25:22.453 }, 00:25:22.453 "peer_address": { 00:25:22.453 "trtype": "RDMA", 00:25:22.453 "adrfam": "IPv4", 00:25:22.453 "traddr": "192.168.100.8", 00:25:22.453 "trsvcid": "39862" 00:25:22.453 }, 00:25:22.453 "auth": { 00:25:22.453 "state": "completed", 00:25:22.453 "digest": "sha512", 00:25:22.453 "dhgroup": "ffdhe4096" 00:25:22.453 } 00:25:22.453 } 00:25:22.453 ]' 00:25:22.453 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:22.453 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:22.453 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:22.453 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:22.453 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:22.453 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:22.453 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:22.453 07:14:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:22.712 07:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OTAxMGE0NzI5NDZjODU3YzQ1ZmM5ZjRkN2Q4ZmZhMTVijb+l: --dhchap-ctrl-secret DHHC-1:02:YTIxM2I2NDc1YzRiMjI1ZWY5OTI5NjU0NDVlMmM0MTQ5OTI3MTE3MWIxODA0NTgyms9fpQ==: 00:25:23.280 07:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:23.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:23.280 07:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:23.280 07:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.280 07:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:23.280 07:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.280 07:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:23.280 07:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:23.280 07:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:23.540 07:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:25:23.540 07:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:23.540 07:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:23.540 07:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:25:23.540 07:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:23.540 07:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:23.540 07:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:23.540 07:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.540 07:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:23.540 07:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.540 07:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:23.540 07:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:23.799 00:25:23.799 07:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:23.799 07:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:23.799 07:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:24.070 07:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.070 07:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:24.070 07:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.070 07:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:24.070 07:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.070 07:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:24.070 { 00:25:24.070 "cntlid": 125, 00:25:24.070 "qid": 0, 00:25:24.070 "state": "enabled", 00:25:24.070 "thread": "nvmf_tgt_poll_group_000", 
00:25:24.070 "listen_address": { 00:25:24.070 "trtype": "RDMA", 00:25:24.070 "adrfam": "IPv4", 00:25:24.070 "traddr": "192.168.100.8", 00:25:24.070 "trsvcid": "4420" 00:25:24.070 }, 00:25:24.070 "peer_address": { 00:25:24.070 "trtype": "RDMA", 00:25:24.070 "adrfam": "IPv4", 00:25:24.070 "traddr": "192.168.100.8", 00:25:24.070 "trsvcid": "42271" 00:25:24.070 }, 00:25:24.070 "auth": { 00:25:24.070 "state": "completed", 00:25:24.070 "digest": "sha512", 00:25:24.070 "dhgroup": "ffdhe4096" 00:25:24.070 } 00:25:24.070 } 00:25:24.070 ]' 00:25:24.070 07:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:24.070 07:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:24.070 07:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:24.070 07:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:24.070 07:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:24.070 07:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:24.070 07:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:24.070 07:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:24.329 07:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:N2RhZGViYmNjMmRkZjczZTA0OGI2YTM1MzkzYWFmYTQ2YTYwMDVjMzgwZjAyODlkEkIl8w==: --dhchap-ctrl-secret DHHC-1:01:YmJhMzE4ODk3OWJkMDQ0Zjg5N2ZiMzZjOWYyYTJmNWN7FiE7: 00:25:24.897 07:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:25.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:25.192 07:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:25.192 07:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.192 07:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:25.192 07:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.192 07:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:25.192 07:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:25.192 07:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:25.192 07:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 
00:25:25.192 07:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:25.192 07:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:25.192 07:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:25:25.192 07:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:25.192 07:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:25.192 07:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:25:25.192 07:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.192 07:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:25.192 07:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.192 07:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:25.192 07:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:25.452 00:25:25.452 07:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:25.452 07:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:25.452 07:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:25.711 07:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.711 07:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:25.711 07:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.711 07:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:25.711 07:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.711 07:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:25.711 { 00:25:25.711 "cntlid": 127, 00:25:25.711 "qid": 0, 00:25:25.711 "state": "enabled", 00:25:25.711 "thread": "nvmf_tgt_poll_group_000", 00:25:25.711 "listen_address": { 00:25:25.711 "trtype": "RDMA", 00:25:25.711 "adrfam": "IPv4", 00:25:25.711 "traddr": "192.168.100.8", 00:25:25.711 "trsvcid": "4420" 00:25:25.711 }, 00:25:25.711 "peer_address": { 00:25:25.711 "trtype": "RDMA", 00:25:25.711 "adrfam": "IPv4", 00:25:25.711 "traddr": "192.168.100.8", 00:25:25.711 "trsvcid": "59263" 00:25:25.711 }, 00:25:25.711 
"auth": { 00:25:25.711 "state": "completed", 00:25:25.711 "digest": "sha512", 00:25:25.711 "dhgroup": "ffdhe4096" 00:25:25.711 } 00:25:25.711 } 00:25:25.711 ]' 00:25:25.711 07:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:25.711 07:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:25.711 07:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:25.711 07:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:25.711 07:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:25.970 07:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:25.970 07:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:25.970 07:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:25.970 07:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ZmQzMGE0MzcxYTM2OTNiODQ4MTVlYmY1N2QzOGZlMDJlMzEyYTE3MjNlYjc3MWY4OWUyYjUzMGU2MjA0ZDdlM4COFM4=: 00:25:26.537 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:26.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:26.796 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:26.796 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.796 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:26.796 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.796 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:25:26.796 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:26.796 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:26.796 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:27.054 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:25:27.054 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:27.054 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:27.054 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe6144 00:25:27.054 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:27.054 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:27.054 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:27.054 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.054 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:27.054 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.054 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:27.054 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:27.313 00:25:27.313 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:27.313 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:27.313 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:27.572 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.572 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:27.572 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.572 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:27.572 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.572 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:27.572 { 00:25:27.572 "cntlid": 129, 00:25:27.572 "qid": 0, 00:25:27.572 "state": "enabled", 00:25:27.572 "thread": "nvmf_tgt_poll_group_000", 00:25:27.572 "listen_address": { 00:25:27.572 "trtype": "RDMA", 00:25:27.572 "adrfam": "IPv4", 00:25:27.572 "traddr": "192.168.100.8", 00:25:27.572 "trsvcid": "4420" 00:25:27.572 }, 00:25:27.572 "peer_address": { 00:25:27.572 "trtype": "RDMA", 00:25:27.572 "adrfam": "IPv4", 00:25:27.572 "traddr": "192.168.100.8", 00:25:27.572 "trsvcid": "52016" 00:25:27.572 }, 00:25:27.572 "auth": { 00:25:27.572 "state": "completed", 00:25:27.572 "digest": "sha512", 00:25:27.572 "dhgroup": "ffdhe6144" 00:25:27.572 } 00:25:27.572 } 00:25:27.572 ]' 00:25:27.572 07:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:27.572 07:14:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:27.572 07:14:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:27.572 07:14:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:25:27.572 07:14:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:27.572 07:14:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:27.572 07:14:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:27.572 07:14:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:27.830 07:14:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDg1ZDk5MjU3NjY3OTQ2ODc0ZDM0YWM1NGI0ZjdkM2U4NjRmNjRmZDQwZTY1YWMyAtInbA==: --dhchap-ctrl-secret DHHC-1:03:NjlkYTIzYjg2ZGFlY2MzYmUyMzBiYTQ2ZjcwMDAzZmM0MGQ3OGQxMTYwOWMzMWNjOTU5OTE5ZWFkYTJmN2M1MF7Oh18=: 00:25:28.396 07:14:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:28.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:28.396 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:28.396 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.396 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:28.396 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.396 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:28.396 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:28.396 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:28.654 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:25:28.654 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:28.654 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:28.654 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:25:28.654 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:28.654 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:28.654 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:28.654 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.654 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:28.654 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.654 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:28.654 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:28.912 00:25:29.170 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:29.170 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:29.170 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:29.170 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.170 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:29.170 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.170 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:29.170 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.170 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:29.170 { 00:25:29.170 "cntlid": 131, 00:25:29.170 "qid": 0, 00:25:29.170 "state": "enabled", 00:25:29.170 "thread": "nvmf_tgt_poll_group_000", 00:25:29.170 "listen_address": { 00:25:29.170 "trtype": "RDMA", 00:25:29.170 "adrfam": "IPv4", 00:25:29.170 "traddr": "192.168.100.8", 00:25:29.170 "trsvcid": "4420" 00:25:29.170 }, 00:25:29.170 "peer_address": { 00:25:29.170 "trtype": "RDMA", 00:25:29.170 "adrfam": "IPv4", 00:25:29.170 "traddr": "192.168.100.8", 00:25:29.170 "trsvcid": "41911" 00:25:29.170 }, 00:25:29.170 "auth": { 00:25:29.170 "state": "completed", 00:25:29.170 "digest": "sha512", 00:25:29.170 "dhgroup": "ffdhe6144" 00:25:29.170 } 00:25:29.170 } 00:25:29.170 ]' 00:25:29.170 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:29.170 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:29.170 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:29.427 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:25:29.427 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:29.427 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:29.427 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:29.427 07:14:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:29.427 07:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OTAxMGE0NzI5NDZjODU3YzQ1ZmM5ZjRkN2Q4ZmZhMTVijb+l: --dhchap-ctrl-secret DHHC-1:02:YTIxM2I2NDc1YzRiMjI1ZWY5OTI5NjU0NDVlMmM0MTQ5OTI3MTE3MWIxODA0NTgyms9fpQ==: 00:25:30.361 07:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:30.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:30.361 07:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:30.361 07:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.361 07:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:30.361 07:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.361 07:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:30.361 07:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:30.361 07:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:30.361 07:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:25:30.361 07:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:30.361 07:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:30.361 07:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:25:30.361 07:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:30.361 07:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:30.361 07:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key 
ckey2 00:25:30.361 07:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.361 07:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:30.361 07:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.361 07:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:30.361 07:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:30.929 00:25:30.929 07:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:30.929 07:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:30.929 07:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:30.929 07:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.929 07:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:30.929 07:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.929 07:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:30.929 07:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.929 07:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:30.929 { 00:25:30.929 "cntlid": 133, 00:25:30.929 "qid": 0, 00:25:30.929 "state": "enabled", 00:25:30.929 "thread": "nvmf_tgt_poll_group_000", 00:25:30.929 "listen_address": { 00:25:30.929 "trtype": "RDMA", 00:25:30.929 "adrfam": "IPv4", 00:25:30.929 "traddr": "192.168.100.8", 00:25:30.929 "trsvcid": "4420" 00:25:30.929 }, 00:25:30.929 "peer_address": { 00:25:30.929 "trtype": "RDMA", 00:25:30.929 "adrfam": "IPv4", 00:25:30.929 "traddr": "192.168.100.8", 00:25:30.929 "trsvcid": "39514" 00:25:30.929 }, 00:25:30.929 "auth": { 00:25:30.929 "state": "completed", 00:25:30.929 "digest": "sha512", 00:25:30.929 "dhgroup": "ffdhe6144" 00:25:30.929 } 00:25:30.929 } 00:25:30.929 ]' 00:25:30.929 07:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:31.188 07:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:31.188 07:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:31.188 07:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:25:31.188 07:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
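The trace above repeats the same verification cycle for each DHCHAP key: the host-side bdev driver is restricted to a single digest/dhgroup pair, the key is registered on the target subsystem, a controller is attached through the host RPC socket, and the negotiated auth parameters of the resulting qpair are checked before tearing down. Below is a condensed, hedged sketch of that cycle using only RPCs, paths, NQNs and the 192.168.100.8 RDMA listener that appear verbatim in this log; the loop, variable names, and the assumption that the target RPCs go to the default target socket are illustrative, not part of the harness itself.

# Sketch of one connect_authenticate cycle as traced above (illustrative only).
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

for keyid in 0 1 2 3; do
    # Host side: allow exactly one digest/dhgroup combination for this pass.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    # Target side: register the host with its DHCHAP key (and controller key).
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # Attach a controller via the host RPC socket, then read back what the
    # target negotiated on the new qpair.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma \
        -f ipv4 -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    $rpc nvmf_subsystem_get_qpairs "$subnqn" \
        | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
    # Tear down before the next key.
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
done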
00:25:31.189 07:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:31.189 07:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:31.189 07:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:31.447 07:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:N2RhZGViYmNjMmRkZjczZTA0OGI2YTM1MzkzYWFmYTQ2YTYwMDVjMzgwZjAyODlkEkIl8w==: --dhchap-ctrl-secret DHHC-1:01:YmJhMzE4ODk3OWJkMDQ0Zjg5N2ZiMzZjOWYyYTJmNWN7FiE7: 00:25:32.014 07:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:32.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:32.014 07:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:32.014 07:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.014 07:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:32.014 07:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.014 07:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:32.014 07:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:32.014 07:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:32.272 07:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:25:32.272 07:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:32.272 07:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:32.272 07:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:25:32.272 07:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:32.272 07:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:32.272 07:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:25:32.272 07:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.272 07:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:32.272 07:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.272 07:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:32.272 07:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:32.530 00:25:32.530 07:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:32.530 07:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:32.530 07:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:32.789 07:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.789 07:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:32.789 07:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.789 07:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:32.789 07:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.789 07:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:32.789 { 00:25:32.789 "cntlid": 135, 00:25:32.789 "qid": 0, 00:25:32.789 "state": "enabled", 00:25:32.789 "thread": "nvmf_tgt_poll_group_000", 00:25:32.789 "listen_address": { 00:25:32.789 "trtype": "RDMA", 00:25:32.789 "adrfam": "IPv4", 00:25:32.789 "traddr": "192.168.100.8", 00:25:32.789 "trsvcid": "4420" 00:25:32.789 }, 00:25:32.789 "peer_address": { 00:25:32.789 "trtype": "RDMA", 00:25:32.789 "adrfam": "IPv4", 00:25:32.789 "traddr": "192.168.100.8", 00:25:32.789 "trsvcid": "53218" 00:25:32.789 }, 00:25:32.789 "auth": { 00:25:32.789 "state": "completed", 00:25:32.789 "digest": "sha512", 00:25:32.789 "dhgroup": "ffdhe6144" 00:25:32.789 } 00:25:32.789 } 00:25:32.789 ]' 00:25:32.789 07:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:32.789 07:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:32.789 07:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:32.789 07:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:25:32.789 07:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:32.789 07:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:32.789 07:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:32.789 07:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:33.047 07:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ZmQzMGE0MzcxYTM2OTNiODQ4MTVlYmY1N2QzOGZlMDJlMzEyYTE3MjNlYjc3MWY4OWUyYjUzMGU2MjA0ZDdlM4COFM4=: 00:25:33.614 07:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:33.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:33.872 07:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:33.872 07:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.872 07:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:33.872 07:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.872 07:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:25:33.872 07:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:33.872 07:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:33.872 07:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:34.131 07:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:25:34.131 07:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:34.131 07:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:34.131 07:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:25:34.131 07:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:34.131 07:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:34.131 07:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:34.131 07:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.131 07:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:34.131 07:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.131 07:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:34.131 07:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:34.389 00:25:34.389 07:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:34.389 07:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:34.389 07:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:34.648 07:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.648 07:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:34.648 07:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.648 07:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:34.648 07:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.648 07:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:34.648 { 00:25:34.648 "cntlid": 137, 00:25:34.648 "qid": 0, 00:25:34.648 "state": "enabled", 00:25:34.648 "thread": "nvmf_tgt_poll_group_000", 00:25:34.648 "listen_address": { 00:25:34.648 "trtype": "RDMA", 00:25:34.648 "adrfam": "IPv4", 00:25:34.648 "traddr": "192.168.100.8", 00:25:34.648 "trsvcid": "4420" 00:25:34.648 }, 00:25:34.648 "peer_address": { 00:25:34.648 "trtype": "RDMA", 00:25:34.648 "adrfam": "IPv4", 00:25:34.648 "traddr": "192.168.100.8", 00:25:34.648 "trsvcid": "59359" 00:25:34.648 }, 00:25:34.648 "auth": { 00:25:34.648 "state": "completed", 00:25:34.648 "digest": "sha512", 00:25:34.648 "dhgroup": "ffdhe8192" 00:25:34.648 } 00:25:34.648 } 00:25:34.648 ]' 00:25:34.648 07:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:34.648 07:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:34.648 07:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:34.906 07:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:34.906 07:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:34.906 07:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:34.906 07:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:34.906 07:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:34.906 07:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDg1ZDk5MjU3NjY3OTQ2ODc0ZDM0YWM1NGI0ZjdkM2U4NjRmNjRmZDQwZTY1YWMyAtInbA==: --dhchap-ctrl-secret DHHC-1:03:NjlkYTIzYjg2ZGFlY2MzYmUyMzBiYTQ2ZjcwMDAzZmM0MGQ3OGQxMTYwOWMzMWNjOTU5OTE5ZWFkYTJmN2M1MF7Oh18=: 00:25:35.842 07:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:35.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:35.842 07:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:35.842 07:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.842 07:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:35.842 07:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.842 07:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:35.842 07:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:35.842 07:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:35.842 07:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:25:35.842 07:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:35.842 07:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:35.842 07:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:25:35.842 07:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:35.842 07:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:35.842 07:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:35.842 07:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.842 07:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:35.842 07:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.842 07:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:35.842 07:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:36.410 00:25:36.410 07:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:36.410 07:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:36.410 07:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:36.668 07:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.668 07:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:36.668 07:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.668 07:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:36.668 07:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.668 07:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:36.668 { 00:25:36.668 "cntlid": 139, 00:25:36.668 "qid": 0, 00:25:36.668 "state": "enabled", 00:25:36.668 "thread": "nvmf_tgt_poll_group_000", 00:25:36.668 "listen_address": { 00:25:36.668 "trtype": "RDMA", 00:25:36.668 "adrfam": "IPv4", 00:25:36.668 "traddr": "192.168.100.8", 00:25:36.668 "trsvcid": "4420" 00:25:36.668 }, 00:25:36.668 "peer_address": { 00:25:36.668 "trtype": "RDMA", 00:25:36.668 "adrfam": "IPv4", 00:25:36.668 "traddr": "192.168.100.8", 00:25:36.668 "trsvcid": "37685" 00:25:36.668 }, 00:25:36.668 "auth": { 00:25:36.668 "state": "completed", 00:25:36.668 "digest": "sha512", 00:25:36.668 "dhgroup": "ffdhe8192" 00:25:36.668 } 00:25:36.668 } 00:25:36.668 ]' 00:25:36.668 07:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:36.668 07:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:36.668 07:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:36.668 07:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:36.668 07:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:36.668 07:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:36.668 07:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:36.668 07:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:36.926 07:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret 
DHHC-1:01:OTAxMGE0NzI5NDZjODU3YzQ1ZmM5ZjRkN2Q4ZmZhMTVijb+l: --dhchap-ctrl-secret DHHC-1:02:YTIxM2I2NDc1YzRiMjI1ZWY5OTI5NjU0NDVlMmM0MTQ5OTI3MTE3MWIxODA0NTgyms9fpQ==: 00:25:37.493 07:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:37.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:37.493 07:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:37.493 07:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.493 07:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:37.752 07:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.752 07:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:37.752 07:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:37.752 07:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:37.752 07:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:25:37.752 07:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:37.752 07:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:37.752 07:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:25:37.752 07:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:37.752 07:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:37.752 07:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:37.752 07:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.752 07:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:37.752 07:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.752 07:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:37.752 07:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key 
ckey2 00:25:38.319 00:25:38.319 07:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:38.319 07:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:38.319 07:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:38.637 07:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.637 07:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:38.637 07:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.637 07:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:38.637 07:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.637 07:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:38.637 { 00:25:38.637 "cntlid": 141, 00:25:38.637 "qid": 0, 00:25:38.637 "state": "enabled", 00:25:38.637 "thread": "nvmf_tgt_poll_group_000", 00:25:38.637 "listen_address": { 00:25:38.637 "trtype": "RDMA", 00:25:38.637 "adrfam": "IPv4", 00:25:38.637 "traddr": "192.168.100.8", 00:25:38.637 "trsvcid": "4420" 00:25:38.637 }, 00:25:38.637 "peer_address": { 00:25:38.637 "trtype": "RDMA", 00:25:38.637 "adrfam": "IPv4", 00:25:38.637 "traddr": "192.168.100.8", 00:25:38.637 "trsvcid": "55595" 00:25:38.637 }, 00:25:38.637 "auth": { 00:25:38.637 "state": "completed", 00:25:38.637 "digest": "sha512", 00:25:38.637 "dhgroup": "ffdhe8192" 00:25:38.637 } 00:25:38.637 } 00:25:38.637 ]' 00:25:38.637 07:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:38.637 07:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:38.637 07:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:38.637 07:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:38.637 07:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:38.637 07:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:38.637 07:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:38.637 07:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:38.895 07:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:N2RhZGViYmNjMmRkZjczZTA0OGI2YTM1MzkzYWFmYTQ2YTYwMDVjMzgwZjAyODlkEkIl8w==: --dhchap-ctrl-secret DHHC-1:01:YmJhMzE4ODk3OWJkMDQ0Zjg5N2ZiMzZjOWYyYTJmNWN7FiE7: 00:25:39.461 07:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
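Besides the SPDK host stack, each pass also exercises the in-kernel initiator: nvme-cli connects with the DHHC-1 secret blobs passed inline and is then disconnected by subsystem NQN. A minimal sketch of that leg follows, reusing the address, NQNs and host ID shown in the trace; the variables stand in for the DHHC-1:02 and DHHC-1:01 secrets printed verbatim above and are not new values.

# Kernel-initiator leg of the same check (sketch; secrets are placeholders
# for the DHHC-1 blobs shown in the trace above).
host_secret='DHHC-1:02:...'   # value of --dhchap-secret from the log
ctrl_secret='DHHC-1:01:...'   # value of --dhchap-ctrl-secret from the log
nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
    --hostid 8013ee90-59d8-e711-906e-00163566263e \
    --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0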
00:25:39.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:39.461 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:39.461 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.461 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:39.461 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.461 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:39.461 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:39.461 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:39.720 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:25:39.720 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:39.720 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:39.720 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:25:39.720 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:39.720 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:39.720 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:25:39.720 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.720 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:39.720 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.720 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:39.720 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:40.288 00:25:40.288 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:40.288 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:40.288 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:40.288 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.288 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:40.288 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.288 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:40.288 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.288 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:40.288 { 00:25:40.288 "cntlid": 143, 00:25:40.288 "qid": 0, 00:25:40.288 "state": "enabled", 00:25:40.288 "thread": "nvmf_tgt_poll_group_000", 00:25:40.288 "listen_address": { 00:25:40.288 "trtype": "RDMA", 00:25:40.288 "adrfam": "IPv4", 00:25:40.288 "traddr": "192.168.100.8", 00:25:40.288 "trsvcid": "4420" 00:25:40.288 }, 00:25:40.288 "peer_address": { 00:25:40.288 "trtype": "RDMA", 00:25:40.288 "adrfam": "IPv4", 00:25:40.288 "traddr": "192.168.100.8", 00:25:40.288 "trsvcid": "32998" 00:25:40.288 }, 00:25:40.288 "auth": { 00:25:40.288 "state": "completed", 00:25:40.288 "digest": "sha512", 00:25:40.288 "dhgroup": "ffdhe8192" 00:25:40.288 } 00:25:40.288 } 00:25:40.288 ]' 00:25:40.288 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:40.288 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:40.288 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:40.548 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:40.548 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:40.548 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:40.548 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:40.548 07:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:40.870 07:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ZmQzMGE0MzcxYTM2OTNiODQ4MTVlYmY1N2QzOGZlMDJlMzEyYTE3MjNlYjc3MWY4OWUyYjUzMGU2MjA0ZDdlM4COFM4=: 00:25:41.129 07:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:41.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:41.388 07:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:41.388 07:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:41.388 07:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:41.388 07:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.388 07:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:25:41.388 07:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:25:41.388 07:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:25:41.388 07:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:41.388 07:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:41.388 07:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:41.647 07:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:25:41.647 07:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:41.647 07:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:41.647 07:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:25:41.647 07:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:41.647 07:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:41.647 07:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:41.647 07:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.647 07:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:41.647 07:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.647 07:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:41.647 07:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:41.906 00:25:41.906 07:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:41.906 07:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:25:41.906 07:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:42.166 07:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.166 07:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:42.166 07:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.166 07:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:42.166 07:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.166 07:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:42.166 { 00:25:42.166 "cntlid": 145, 00:25:42.166 "qid": 0, 00:25:42.166 "state": "enabled", 00:25:42.166 "thread": "nvmf_tgt_poll_group_000", 00:25:42.166 "listen_address": { 00:25:42.166 "trtype": "RDMA", 00:25:42.166 "adrfam": "IPv4", 00:25:42.166 "traddr": "192.168.100.8", 00:25:42.166 "trsvcid": "4420" 00:25:42.166 }, 00:25:42.166 "peer_address": { 00:25:42.166 "trtype": "RDMA", 00:25:42.166 "adrfam": "IPv4", 00:25:42.166 "traddr": "192.168.100.8", 00:25:42.166 "trsvcid": "55269" 00:25:42.166 }, 00:25:42.166 "auth": { 00:25:42.166 "state": "completed", 00:25:42.166 "digest": "sha512", 00:25:42.166 "dhgroup": "ffdhe8192" 00:25:42.166 } 00:25:42.166 } 00:25:42.166 ]' 00:25:42.166 07:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:42.166 07:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:42.166 07:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:42.166 07:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:42.166 07:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:42.425 07:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:42.425 07:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:42.425 07:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:42.425 07:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDg1ZDk5MjU3NjY3OTQ2ODc0ZDM0YWM1NGI0ZjdkM2U4NjRmNjRmZDQwZTY1YWMyAtInbA==: --dhchap-ctrl-secret DHHC-1:03:NjlkYTIzYjg2ZGFlY2MzYmUyMzBiYTQ2ZjcwMDAzZmM0MGQ3OGQxMTYwOWMzMWNjOTU5OTE5ZWFkYTJmN2M1MF7Oh18=: 00:25:42.993 07:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:43.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:43.252 07:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:43.252 07:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.252 07:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:43.252 07:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.252 07:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:25:43.252 07:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.252 07:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:43.252 07:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.252 07:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:43.252 07:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:25:43.252 07:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:43.252 07:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:25:43.252 07:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:43.252 07:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:25:43.252 07:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:43.252 07:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:43.252 07:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:26:15.337 request: 00:26:15.337 { 00:26:15.337 "name": "nvme0", 00:26:15.337 "trtype": "rdma", 00:26:15.337 "traddr": "192.168.100.8", 00:26:15.337 "adrfam": "ipv4", 00:26:15.337 "trsvcid": "4420", 00:26:15.337 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:26:15.337 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:26:15.337 "prchk_reftag": false, 00:26:15.337 "prchk_guard": false, 00:26:15.337 "hdgst": false, 00:26:15.337 "ddgst": false, 00:26:15.337 "dhchap_key": "key2", 00:26:15.337 "method": 
"bdev_nvme_attach_controller", 00:26:15.337 "req_id": 1 00:26:15.337 } 00:26:15.337 Got JSON-RPC error response 00:26:15.337 response: 00:26:15.337 { 00:26:15.337 "code": -5, 00:26:15.337 "message": "Input/output error" 00:26:15.337 } 00:26:15.337 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:26:15.337 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:15.337 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:15.337 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:15.337 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:15.337 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.337 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:15.337 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.337 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:15.337 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.337 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:15.337 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.337 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:15.337 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:26:15.337 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:15.337 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:26:15.337 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:15.337 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:26:15.337 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:15.337 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:15.337 07:15:28 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:15.337 request: 00:26:15.337 { 00:26:15.337 "name": "nvme0", 00:26:15.337 "trtype": "rdma", 00:26:15.337 "traddr": "192.168.100.8", 00:26:15.337 "adrfam": "ipv4", 00:26:15.337 "trsvcid": "4420", 00:26:15.337 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:26:15.337 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:26:15.337 "prchk_reftag": false, 00:26:15.337 "prchk_guard": false, 00:26:15.337 "hdgst": false, 00:26:15.337 "ddgst": false, 00:26:15.337 "dhchap_key": "key1", 00:26:15.338 "dhchap_ctrlr_key": "ckey2", 00:26:15.338 "method": "bdev_nvme_attach_controller", 00:26:15.338 "req_id": 1 00:26:15.338 } 00:26:15.338 Got JSON-RPC error response 00:26:15.338 response: 00:26:15.338 { 00:26:15.338 "code": -5, 00:26:15.338 "message": "Input/output error" 00:26:15.338 } 00:26:15.338 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:26:15.338 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:15.338 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:15.338 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:15.338 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:15.338 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.338 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:15.338 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.338 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:26:15.338 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.338 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:15.338 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.338 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:15.338 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:26:15.338 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:15.338 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:26:15.338 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:15.338 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:26:15.338 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:15.338 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:15.338 07:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:47.431 request: 00:26:47.431 { 00:26:47.431 "name": "nvme0", 00:26:47.431 "trtype": "rdma", 00:26:47.431 "traddr": "192.168.100.8", 00:26:47.431 "adrfam": "ipv4", 00:26:47.431 "trsvcid": "4420", 00:26:47.431 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:26:47.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:26:47.431 "prchk_reftag": false, 00:26:47.431 "prchk_guard": false, 00:26:47.431 "hdgst": false, 00:26:47.431 "ddgst": false, 00:26:47.431 "dhchap_key": "key1", 00:26:47.431 "dhchap_ctrlr_key": "ckey1", 00:26:47.431 "method": "bdev_nvme_attach_controller", 00:26:47.431 "req_id": 1 00:26:47.431 } 00:26:47.431 Got JSON-RPC error response 00:26:47.431 response: 00:26:47.431 { 00:26:47.431 "code": -5, 00:26:47.431 "message": "Input/output error" 00:26:47.431 } 00:26:47.431 07:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:26:47.431 07:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:47.432 07:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:47.432 07:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:47.432 07:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:47.432 07:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.432 07:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:47.432 07:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.432 07:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1701359 00:26:47.432 07:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1701359 ']' 00:26:47.432 07:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1701359 00:26:47.432 07:15:59 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:26:47.432 07:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:47.432 07:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1701359 00:26:47.432 07:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:47.432 07:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:47.432 07:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1701359' 00:26:47.432 killing process with pid 1701359 00:26:47.432 07:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1701359 00:26:47.432 07:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1701359 00:26:47.432 07:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:26:47.432 07:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:47.432 07:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:47.432 07:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:47.432 07:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1734656 00:26:47.432 07:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:26:47.432 07:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1734656 00:26:47.432 07:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1734656 ']' 00:26:47.432 07:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:47.432 07:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:47.432 07:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
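The attach attempts traced above are deliberate negative tests: the subsystem is registered for the host with --dhchap-key key1 (once with a controller key, once without), and the host side then calls bdev_nvme_attach_controller with a controller key the target does not expect, which must fail with JSON-RPC code -5 (Input/output error). A minimal sketch of that expect-failure check is below; the wrapper name expect_attach_failure and the standalone-script form are illustrative assumptions, not the NOT/hostrpc helpers used by target/auth.sh.

#!/usr/bin/env bash
# Hedged sketch: assert that a DH-HMAC-CHAP attach with a mismatched
# controller key is rejected by the target (expected JSON-RPC error -5).
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock

expect_attach_failure() {
    if "$rpc_py" -s "$host_sock" bdev_nvme_attach_controller -b nvme0 \
        -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
        echo "attach unexpectedly succeeded with a mismatched controller key" >&2
        return 1
    fi
    return 0    # the failure is the expected outcome here
}

expect_attach_failure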
00:26:47.432 07:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:47.432 07:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:47.432 07:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:47.432 07:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:26:47.432 07:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:47.432 07:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:47.432 07:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:47.432 07:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:47.432 07:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:26:47.432 07:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1734656 00:26:47.432 07:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1734656 ']' 00:26:47.432 07:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:47.432 07:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:47.432 07:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:47.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
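At this point auth.sh has restarted the target (nvmfappstart --wait-for-rpc -L nvmf_auth, pid 1734656) and blocks in waitforlisten until the new nvmf_tgt answers on /var/tmp/spdk.sock. A minimal sketch of that polling pattern follows; the retry budget, sleep interval, and the use of rpc.py spdk_get_version as the liveness probe are assumptions for illustration, not the autotest_common.sh implementation.

#!/usr/bin/env bash
# Hedged sketch of the wait-for-RPC-socket loop that waitforlisten exercises.
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
rpc_sock=/var/tmp/spdk.sock

wait_for_rpc() {
    local retries=100
    while (( retries-- > 0 )); do
        # spdk_get_version only succeeds once the app is listening on the socket.
        if "$rpc_py" -s "$rpc_sock" spdk_get_version &>/dev/null; then
            return 0
        fi
        sleep 0.5
    done
    echo "nvmf_tgt never started listening on $rpc_sock" >&2
    return 1
}

wait_for_rpc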
00:26:47.432 07:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:47.432 07:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:47.432 07:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:47.432 07:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:26:47.432 07:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:26:47.432 07:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.432 07:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:48.001 07:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.001 07:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:26:48.001 07:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:48.001 07:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:48.001 07:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:26:48.001 07:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:26:48.001 07:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:48.001 07:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:26:48.001 07:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.001 07:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:48.001 07:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.001 07:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:48.001 07:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:48.569 00:26:48.569 07:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:48.569 07:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:48.569 07:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:48.569 07:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.569 07:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:48.569 07:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.569 07:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:48.569 07:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.569 07:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:48.569 { 00:26:48.569 "cntlid": 1, 00:26:48.569 "qid": 0, 00:26:48.569 "state": "enabled", 00:26:48.569 "thread": "nvmf_tgt_poll_group_000", 00:26:48.569 "listen_address": { 00:26:48.569 "trtype": "RDMA", 00:26:48.569 "adrfam": "IPv4", 00:26:48.569 "traddr": "192.168.100.8", 00:26:48.569 "trsvcid": "4420" 00:26:48.569 }, 00:26:48.569 "peer_address": { 00:26:48.569 "trtype": "RDMA", 00:26:48.569 "adrfam": "IPv4", 00:26:48.569 "traddr": "192.168.100.8", 00:26:48.569 "trsvcid": "56895" 00:26:48.569 }, 00:26:48.569 "auth": { 00:26:48.569 "state": "completed", 00:26:48.569 "digest": "sha512", 00:26:48.569 "dhgroup": "ffdhe8192" 00:26:48.569 } 00:26:48.569 } 00:26:48.569 ]' 00:26:48.569 07:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:48.828 07:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:48.828 07:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:48.828 07:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:26:48.828 07:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:48.828 07:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:48.828 07:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:48.828 07:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:49.087 07:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ZmQzMGE0MzcxYTM2OTNiODQ4MTVlYmY1N2QzOGZlMDJlMzEyYTE3MjNlYjc3MWY4OWUyYjUzMGU2MjA0ZDdlM4COFM4=: 00:26:49.654 07:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:49.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:49.654 07:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:49.654 07:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.654 07:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:49.654 07:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.654 07:16:04 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:26:49.654 07:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.654 07:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:49.654 07:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.654 07:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:26:49.654 07:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:26:49.913 07:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:49.913 07:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:26:49.913 07:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:49.913 07:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:26:49.913 07:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:49.913 07:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:26:49.913 07:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:49.913 07:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:49.914 07:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:21.997 request: 00:27:21.997 { 00:27:21.997 "name": "nvme0", 00:27:21.997 "trtype": "rdma", 00:27:21.997 "traddr": "192.168.100.8", 00:27:21.997 "adrfam": "ipv4", 00:27:21.997 "trsvcid": "4420", 00:27:21.997 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:27:21.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:27:21.997 "prchk_reftag": false, 00:27:21.997 "prchk_guard": false, 00:27:21.997 "hdgst": false, 00:27:21.997 "ddgst": false, 00:27:21.997 "dhchap_key": "key3", 00:27:21.997 "method": "bdev_nvme_attach_controller", 00:27:21.997 "req_id": 1 00:27:21.997 } 00:27:21.997 Got JSON-RPC error response 00:27:21.997 response: 
00:27:21.997 { 00:27:21.997 "code": -5, 00:27:21.997 "message": "Input/output error" 00:27:21.997 } 00:27:21.997 07:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:27:21.997 07:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:21.997 07:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:21.997 07:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:21.997 07:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:27:21.997 07:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:27:21.997 07:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:27:21.997 07:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:27:21.997 07:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:21.997 07:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:27:21.997 07:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:21.997 07:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:27:21.997 07:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:21.997 07:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:27:21.997 07:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:21.997 07:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:21.997 07:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:54.175 request: 00:27:54.175 { 00:27:54.175 "name": "nvme0", 00:27:54.175 "trtype": "rdma", 00:27:54.175 "traddr": "192.168.100.8", 00:27:54.175 "adrfam": "ipv4", 00:27:54.175 "trsvcid": "4420", 00:27:54.175 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:27:54.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:27:54.175 
"prchk_reftag": false, 00:27:54.175 "prchk_guard": false, 00:27:54.175 "hdgst": false, 00:27:54.175 "ddgst": false, 00:27:54.175 "dhchap_key": "key3", 00:27:54.175 "method": "bdev_nvme_attach_controller", 00:27:54.175 "req_id": 1 00:27:54.175 } 00:27:54.175 Got JSON-RPC error response 00:27:54.175 response: 00:27:54.175 { 00:27:54.175 "code": -5, 00:27:54.175 "message": "Input/output error" 00:27:54.175 } 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:27:54.175 07:17:05 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:27:54.175 request: 00:27:54.175 { 00:27:54.175 "name": "nvme0", 00:27:54.175 "trtype": "rdma", 00:27:54.175 "traddr": "192.168.100.8", 00:27:54.175 "adrfam": "ipv4", 00:27:54.175 "trsvcid": "4420", 00:27:54.175 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:27:54.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:27:54.175 "prchk_reftag": false, 00:27:54.175 "prchk_guard": false, 00:27:54.175 "hdgst": false, 00:27:54.175 "ddgst": false, 00:27:54.175 "dhchap_key": "key0", 00:27:54.175 "dhchap_ctrlr_key": "key1", 00:27:54.175 "method": "bdev_nvme_attach_controller", 00:27:54.175 "req_id": 1 00:27:54.175 } 00:27:54.175 Got JSON-RPC error response 00:27:54.175 response: 00:27:54.175 { 00:27:54.175 "code": -5, 00:27:54.175 "message": "Input/output error" 00:27:54.175 } 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:27:54.175 00:27:54.175 
07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:27:54.175 07:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:54.175 07:17:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.175 07:17:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:54.175 07:17:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:54.175 07:17:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:27:54.175 07:17:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:27:54.175 07:17:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1701491 00:27:54.175 07:17:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1701491 ']' 00:27:54.175 07:17:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1701491 00:27:54.175 07:17:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:27:54.175 07:17:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:54.175 07:17:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1701491 00:27:54.175 07:17:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:54.175 07:17:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:54.176 07:17:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1701491' 00:27:54.176 killing process with pid 1701491 00:27:54.176 07:17:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1701491 00:27:54.176 07:17:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1701491 00:27:54.176 07:17:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:27:54.176 07:17:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:54.176 07:17:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:27:54.176 07:17:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:27:54.176 07:17:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:27:54.176 07:17:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:27:54.176 07:17:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:54.176 07:17:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:27:54.176 rmmod nvme_rdma 00:27:54.176 rmmod nvme_fabrics 00:27:54.176 07:17:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:54.176 07:17:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:27:54.176 07:17:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:27:54.176 07:17:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1734656 ']' 00:27:54.176 07:17:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1734656 00:27:54.176 07:17:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1734656 ']' 00:27:54.176 07:17:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1734656 00:27:54.176 07:17:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:27:54.176 07:17:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:54.176 07:17:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1734656 00:27:54.176 07:17:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:54.176 07:17:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:54.176 07:17:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1734656' 00:27:54.176 killing process with pid 1734656 00:27:54.176 07:17:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1734656 00:27:54.176 07:17:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1734656 00:27:55.555 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:55.555 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:27:55.555 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Hjn /tmp/spdk.key-sha256.DyX /tmp/spdk.key-sha384.fpy /tmp/spdk.key-sha512.MBM /tmp/spdk.key-sha512.kZq /tmp/spdk.key-sha384.beG /tmp/spdk.key-sha256.BSO '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:27:55.555 00:27:55.555 real 4m30.107s 00:27:55.555 user 9m32.988s 00:27:55.555 sys 0m25.365s 00:27:55.555 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:55.555 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:55.555 ************************************ 00:27:55.555 END TEST nvmf_auth_target 00:27:55.555 ************************************ 00:27:55.555 07:17:10 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:27:55.555 07:17:10 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 1 -eq 1 ']' 00:27:55.555 07:17:10 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:27:55.555 07:17:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:55.555 07:17:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:55.555 07:17:10 
nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:55.555 ************************************ 00:27:55.555 START TEST nvmf_fuzz 00:27:55.555 ************************************ 00:27:55.555 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:27:55.815 * Looking for test storage... 00:27:55.815 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:55.815 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:27:55.816 07:17:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:03.938 07:17:18 
nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:03.938 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:03.938 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:03.938 07:17:18 
nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:03.938 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:03.938 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@420 -- # rdma_device_init 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # uname 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # modprobe ib_cm 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@63 -- # modprobe ib_core 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@64 -- # modprobe ib_umad 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@65 -- # modprobe ib_uverbs 
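The fuzz-test preamble above has matched the two mlx5 ports (0000:d9:00.0 and 0000:d9:00.1, device 0x1015) to the mlx_0_0/mlx_0_1 net devices and is loading the IB/RDMA kernel modules; the entries that follow read each port's IPv4 address with the ip/awk/cut pipeline visible in the trace. The sketch below condenses that address-discovery step; the helper name get_ipv4 and the fixed interface list are illustrative, not the nvmf/common.sh source.

#!/usr/bin/env bash
# Hedged sketch of the per-interface IPv4 lookup used during RDMA test setup.
get_ipv4() {
    local ifname=$1
    # "ip -o -4 addr show" prints one line per address; field 4 is ADDR/PREFIX.
    ip -o -4 addr show "$ifname" | awk '{print $4}' | cut -d/ -f1
}

for nic in mlx_0_0 mlx_0_1; do
    addr=$(get_ipv4 "$nic")
    # On this rig the log resolves these to 192.168.100.8 and 192.168.100.9.
    echo "$nic -> ${addr:-<no IPv4 address assigned>}"
done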
00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@66 -- # modprobe iw_cm 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # allocate_nic_ips 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@73 -- # get_rdma_if_list 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:03.938 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@81 
-- # ip addr show mlx_0_0 00:28:03.939 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:03.939 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:03.939 altname enp217s0f0np0 00:28:03.939 altname ens818f0np0 00:28:03.939 inet 192.168.100.8/24 scope global mlx_0_0 00:28:03.939 valid_lft forever preferred_lft forever 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:28:03.939 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:03.939 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:03.939 altname enp217s0f1np1 00:28:03.939 altname ens818f1np1 00:28:03.939 inet 192.168.100.9/24 scope global mlx_0_1 00:28:03.939 valid_lft forever preferred_lft forever 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@86 -- # get_rdma_if_list 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in 
"${net_devs[@]}" 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:28:03.939 192.168.100.9' 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@457 -- # head -n 1 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:28:03.939 192.168.100.9' 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:28:03.939 192.168.100.9' 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@458 -- # tail -n +2 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@458 -- # head -n 1 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:28:03.939 
07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1749982 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1749982 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1749982 ']' 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:03.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:03.939 07:17:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:28:04.877 07:17:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:04.877 07:17:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:28:04.877 07:17:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:04.877 07:17:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.877 07:17:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:28:05.136 07:17:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.136 07:17:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:28:05.136 07:17:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.136 07:17:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:28:05.136 Malloc0 00:28:05.136 07:17:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.136 07:17:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:05.136 07:17:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.136 07:17:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:28:05.136 07:17:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.136 07:17:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:05.136 07:17:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.136 07:17:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 
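The fuzz stage above starts a standalone target and configures it over the RPC socket; rpc_cmd in the autotest helpers forwards its arguments to scripts/rpc.py. A standalone sketch of the same bring-up with the arguments from this run (the listener registration appears in the next entries); explicit rpc.py calls stand in for the rpc_cmd wrapper.
# Sketch only: the test script waits for /var/tmp/spdk.sock before issuing RPCs.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420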
00:28:05.136 07:17:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.136 07:17:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:05.136 07:17:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.136 07:17:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:28:05.136 07:17:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.136 07:17:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:28:05.136 07:17:19 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:28:37.258 Fuzzing completed. Shutting down the fuzz application 00:28:37.258 00:28:37.258 Dumping successful admin opcodes: 00:28:37.258 8, 9, 10, 24, 00:28:37.258 Dumping successful io opcodes: 00:28:37.258 0, 9, 00:28:37.258 NS: 0x200003af0ec0 I/O qp, Total commands completed: 801121, total successful commands: 4662, random_seed: 2193784960 00:28:37.258 NS: 0x200003af0ec0 admin qp, Total commands completed: 112208, total successful commands: 920, random_seed: 3138748800 00:28:37.258 07:17:50 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:28:38.196 Fuzzing completed. 
Shutting down the fuzz application 00:28:38.196 00:28:38.196 Dumping successful admin opcodes: 00:28:38.196 24, 00:28:38.196 Dumping successful io opcodes: 00:28:38.196 00:28:38.196 NS: 0x200003af0ec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1985834876 00:28:38.196 NS: 0x200003af0ec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1985927636 00:28:38.196 07:17:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:38.196 07:17:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.196 07:17:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:28:38.196 07:17:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.196 07:17:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:38.196 07:17:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:28:38.196 07:17:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:38.196 07:17:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:28:38.196 07:17:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:28:38.196 07:17:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:28:38.196 07:17:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:28:38.196 07:17:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:38.196 07:17:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:28:38.196 rmmod nvme_rdma 00:28:38.196 rmmod nvme_fabrics 00:28:38.196 07:17:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:38.196 07:17:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:28:38.196 07:17:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:28:38.196 07:17:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1749982 ']' 00:28:38.196 07:17:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 1749982 00:28:38.196 07:17:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1749982 ']' 00:28:38.196 07:17:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 1749982 00:28:38.196 07:17:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:28:38.196 07:17:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:38.196 07:17:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1749982 00:28:38.196 07:17:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:38.196 07:17:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:38.196 07:17:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1749982' 00:28:38.196 killing process with pid 1749982 00:28:38.196 07:17:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 1749982 
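Two fuzzer passes are recorded above: a timed randomized run (-t 30, -S 123456) and a replay of the bundled example.json command set, both attaching through the transport ID built at fabrics_fuzz.sh@27. Spelled out relative to the SPDK checkout, with the flags copied verbatim from the log:
TRID='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420'
./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a
./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$TRID" -j ./test/app/fuzz/nvme_fuzz/example.json -a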
00:28:38.196 07:17:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 1749982 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:28:40.102 00:28:40.102 real 0m44.191s 00:28:40.102 user 0m56.407s 00:28:40.102 sys 0m20.443s 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:28:40.102 ************************************ 00:28:40.102 END TEST nvmf_fuzz 00:28:40.102 ************************************ 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:40.102 ************************************ 00:28:40.102 START TEST nvmf_multiconnection 00:28:40.102 ************************************ 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:28:40.102 * Looking for test storage... 
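Teardown for the fuzz stage, mirrored from the entries above: the subsystem is deleted over RPC, the host-side nvme-rdma/nvme-fabrics modules are unloaded, and the target process is killed and waited on before the fuzz logs are removed. A minimal sketch, assuming the target was started from the same shell as in the earlier sketch:
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-rdma          # the rmmod lines above show nvme_fabrics coming out with it
kill $nvmfpid && wait $nvmfpid    # killprocess in the helpers does the same kill/wait pair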
00:28:40.102 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:28:40.102 07:17:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:28:48.225 07:18:02 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:48.225 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:48.225 07:18:02 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:48.225 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:48.225 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:48.225 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@420 -- # rdma_device_init 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # uname 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:28:48.225 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # modprobe ib_cm 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@63 -- # modprobe ib_core 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@64 -- # modprobe ib_umad 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@66 -- # modprobe iw_cm 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # allocate_nic_ips 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@73 -- # get_rdma_if_list 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:28:48.226 07:18:02 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:28:48.226 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:48.226 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:48.226 altname enp217s0f0np0 00:28:48.226 altname ens818f0np0 00:28:48.226 inet 192.168.100.8/24 scope global mlx_0_0 00:28:48.226 valid_lft forever preferred_lft forever 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:28:48.226 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:48.226 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:48.226 altname enp217s0f1np1 00:28:48.226 altname ens818f1np1 00:28:48.226 inet 
192.168.100.9/24 scope global mlx_0_1 00:28:48.226 valid_lft forever preferred_lft forever 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@86 -- # get_rdma_if_list 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk 
'{print $4}' 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:28:48.226 192.168.100.9' 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:28:48.226 192.168.100.9' 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@457 -- # head -n 1 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:28:48.226 192.168.100.9' 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@458 -- # tail -n +2 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@458 -- # head -n 1 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:28:48.226 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:48.227 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:48.227 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:48.227 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1760024 00:28:48.227 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1760024 00:28:48.227 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:48.227 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 1760024 ']' 
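The address bookkeeping above repeats what the fuzz stage already did: each RDMA-capable netdev is asked for its IPv4 address and the first two answers become the target IPs. A condensed sketch of that derivation, with the interface names and addresses reported in this run (the real common.sh builds the list by iterating get_rdma_if_list):
get_ip_address() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9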
00:28:48.227 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:48.227 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:48.227 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:48.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:48.227 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:48.227 07:18:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:48.227 [2024-07-24 07:18:02.692717] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:28:48.227 [2024-07-24 07:18:02.692811] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:48.227 EAL: No free 2048 kB hugepages reported on node 1 00:28:48.227 [2024-07-24 07:18:02.843561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:48.486 [2024-07-24 07:18:03.057931] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:48.486 [2024-07-24 07:18:03.057974] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:48.486 [2024-07-24 07:18:03.057989] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:48.486 [2024-07-24 07:18:03.058001] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:48.486 [2024-07-24 07:18:03.058013] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
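The startup notices above point at two ways to pull trace data out of this target instance (shm id 0, from the -i 0 flag). Both commands are taken from the log text; the output redirection and destination paths are illustrative only:
spdk_trace -s nvmf -i 0 > nvmf_trace.txt   # live snapshot of the enabled tracepoint groups
cp /dev/shm/nvmf_trace.0 .                 # or keep the shm file for offline analysis/debug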
00:28:48.486 [2024-07-24 07:18:03.060662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.486 [2024-07-24 07:18:03.060684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:48.486 [2024-07-24 07:18:03.060743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.486 [2024-07-24 07:18:03.060762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:49.052 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:49.052 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:28:49.052 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:49.052 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:49.052 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:49.052 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:49.052 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:49.052 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.052 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:49.052 [2024-07-24 07:18:03.550640] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f8a3f18c940) succeed. 00:28:49.052 [2024-07-24 07:18:03.560256] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f8a3f148940) succeed. 
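The four "Reactor started on core" notices above follow directly from the -m 0xF core mask passed to nvmf_tgt: 0xF is binary 1111, one bit per CPU 0-3. A one-liner to confirm the mapping:
printf '0xF -> cores:'; for c in 0 1 2 3 4 5 6 7; do (( (0xF >> c) & 1 )) && printf ' %d' "$c"; done; echo ''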
00:28:49.310 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.310 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:28:49.310 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:49.310 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:49.310 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.310 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:49.569 Malloc1 00:28:49.569 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.569 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:28:49.569 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.569 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:49.569 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.569 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:49.569 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.569 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:49.569 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.569 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:49.569 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.569 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:49.569 [2024-07-24 07:18:03.990255] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:49.569 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.569 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:49.569 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:28:49.569 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.569 07:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:49.569 Malloc2 00:28:49.569 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.569 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 
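(The loop driven by "seq 1 11" repeats the same four RPCs for each of the eleven subsystems, Malloc1/cnode1 through Malloc11/cnode11, as the following output shows. Expressed directly with rpc.py, a sketch of one pass of target/multiconnection.sh lines 21-25 looks like:)

    # Sketch: one 64 MB malloc bdev (512-byte blocks), one subsystem, one namespace
    # and one RDMA listener per index, mirroring the rpc_cmd calls in the log.
    for i in $(seq 1 11); do
        ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
        ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t rdma -a 192.168.100.8 -s 4420
    done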
00:28:49.569 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.569 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:49.569 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.569 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:28:49.569 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.569 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:49.569 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.569 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:28:49.569 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.569 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:49.569 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.569 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:49.569 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:28:49.569 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.569 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:49.829 Malloc3 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.829 
07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:49.829 Malloc4 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:49.829 Malloc5 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:28:49.829 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.829 07:18:04 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:50.089 Malloc6 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:50.089 07:18:04 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:50.089 Malloc7 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.089 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:50.349 Malloc8 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:50.349 07:18:04 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:50.349 Malloc9 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.349 07:18:04 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.349 07:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:50.608 Malloc10 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:50.608 Malloc11 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.608 07:18:05 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:50.608 07:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:28:51.544 07:18:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:28:51.544 07:18:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:28:51.544 07:18:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:28:51.544 07:18:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:28:51.544 07:18:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:28:54.077 07:18:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:28:54.077 07:18:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:28:54.077 07:18:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK1 00:28:54.077 07:18:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:28:54.077 07:18:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:28:54.077 07:18:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:28:54.077 07:18:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:54.077 07:18:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:28:54.646 07:18:09 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:28:54.646 07:18:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:28:54.646 07:18:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:28:54.646 07:18:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:28:54.646 07:18:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:28:56.547 07:18:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:28:56.547 07:18:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:28:56.547 07:18:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK2 00:28:56.805 07:18:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:28:56.805 07:18:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:28:56.805 07:18:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:28:56.805 07:18:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:56.805 07:18:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:28:57.741 07:18:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:28:57.741 07:18:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:28:57.741 07:18:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:28:57.741 07:18:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:28:57.741 07:18:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:28:59.666 07:18:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:28:59.666 07:18:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:28:59.666 07:18:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK3 00:28:59.666 07:18:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:28:59.666 07:18:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:28:59.666 07:18:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:28:59.666 07:18:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:59.666 07:18:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:29:00.601 07:18:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:29:00.601 07:18:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:29:00.601 07:18:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:29:00.601 07:18:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:29:00.601 07:18:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:29:03.132 07:18:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:29:03.132 07:18:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:29:03.132 07:18:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK4 00:29:03.132 07:18:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:29:03.132 07:18:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:29:03.132 07:18:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:29:03.132 07:18:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:03.132 07:18:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:29:03.698 07:18:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:29:03.698 07:18:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:29:03.698 07:18:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:29:03.698 07:18:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:29:03.698 07:18:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:29:05.598 07:18:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:29:05.598 07:18:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:29:05.598 07:18:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK5 00:29:05.598 07:18:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:29:05.598 07:18:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:29:05.598 07:18:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:29:05.598 07:18:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:05.598 07:18:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:29:06.534 07:18:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:29:06.534 07:18:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:29:06.534 07:18:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:29:06.534 07:18:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:29:06.534 07:18:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:29:09.066 07:18:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:29:09.066 07:18:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:29:09.066 07:18:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK6 00:29:09.066 07:18:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:29:09.066 07:18:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:29:09.066 07:18:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:29:09.066 07:18:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:09.066 07:18:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:29:09.632 07:18:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:29:09.632 07:18:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:29:09.632 07:18:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:29:09.632 07:18:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:29:09.632 07:18:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:29:12.163 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:29:12.163 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:29:12.163 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK7 00:29:12.163 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:29:12.163 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == 
nvme_device_counter )) 00:29:12.163 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:29:12.163 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:12.163 07:18:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:29:12.730 07:18:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:29:12.730 07:18:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:29:12.730 07:18:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:29:12.730 07:18:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:29:12.730 07:18:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:29:14.636 07:18:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:29:14.636 07:18:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:29:14.636 07:18:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK8 00:29:14.636 07:18:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:29:14.636 07:18:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:29:14.636 07:18:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:29:14.636 07:18:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:14.636 07:18:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:29:15.572 07:18:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:29:15.572 07:18:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:29:15.572 07:18:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:29:15.572 07:18:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:29:15.572 07:18:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:29:18.107 07:18:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:29:18.107 07:18:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:29:18.107 07:18:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK9 00:29:18.107 07:18:32 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:29:18.107 07:18:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:29:18.107 07:18:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:29:18.107 07:18:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:18.107 07:18:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:29:18.716 07:18:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:29:18.716 07:18:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:29:18.716 07:18:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:29:18.716 07:18:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:29:18.716 07:18:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:29:20.646 07:18:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:29:20.646 07:18:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:29:20.646 07:18:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK10 00:29:20.646 07:18:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:29:20.646 07:18:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:29:20.646 07:18:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:29:20.646 07:18:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:20.646 07:18:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:29:21.582 07:18:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:29:21.582 07:18:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:29:21.582 07:18:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:29:21.582 07:18:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:29:21.582 07:18:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:29:24.120 07:18:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:29:24.120 07:18:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:29:24.120 07:18:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK11 00:29:24.120 07:18:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:29:24.120 07:18:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:29:24.121 07:18:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:29:24.121 07:18:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:29:24.121 [global] 00:29:24.121 thread=1 00:29:24.121 invalidate=1 00:29:24.121 rw=read 00:29:24.121 time_based=1 00:29:24.121 runtime=10 00:29:24.121 ioengine=libaio 00:29:24.121 direct=1 00:29:24.121 bs=262144 00:29:24.121 iodepth=64 00:29:24.121 norandommap=1 00:29:24.121 numjobs=1 00:29:24.121 00:29:24.121 [job0] 00:29:24.121 filename=/dev/nvme0n1 00:29:24.121 [job1] 00:29:24.121 filename=/dev/nvme10n1 00:29:24.121 [job2] 00:29:24.121 filename=/dev/nvme1n1 00:29:24.121 [job3] 00:29:24.121 filename=/dev/nvme2n1 00:29:24.121 [job4] 00:29:24.121 filename=/dev/nvme3n1 00:29:24.121 [job5] 00:29:24.121 filename=/dev/nvme4n1 00:29:24.121 [job6] 00:29:24.121 filename=/dev/nvme5n1 00:29:24.121 [job7] 00:29:24.121 filename=/dev/nvme6n1 00:29:24.121 [job8] 00:29:24.121 filename=/dev/nvme7n1 00:29:24.121 [job9] 00:29:24.121 filename=/dev/nvme8n1 00:29:24.121 [job10] 00:29:24.121 filename=/dev/nvme9n1 00:29:24.121 Could not set queue depth (nvme0n1) 00:29:24.121 Could not set queue depth (nvme10n1) 00:29:24.121 Could not set queue depth (nvme1n1) 00:29:24.121 Could not set queue depth (nvme2n1) 00:29:24.121 Could not set queue depth (nvme3n1) 00:29:24.121 Could not set queue depth (nvme4n1) 00:29:24.121 Could not set queue depth (nvme5n1) 00:29:24.121 Could not set queue depth (nvme6n1) 00:29:24.121 Could not set queue depth (nvme7n1) 00:29:24.121 Could not set queue depth (nvme8n1) 00:29:24.121 Could not set queue depth (nvme9n1) 00:29:24.378 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:24.378 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:24.378 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:24.378 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:24.378 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:24.378 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:24.378 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:24.378 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:24.378 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:24.378 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:24.378 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:24.378 fio-3.35 00:29:24.378 Starting 11 threads 00:29:36.587 00:29:36.587 job0: (groupid=0, jobs=1): err= 0: pid=1766739: Wed Jul 24 07:18:49 2024 00:29:36.587 read: IOPS=1296, BW=324MiB/s (340MB/s)(3260MiB/10059msec) 00:29:36.587 slat (usec): min=13, max=22281, avg=763.45, stdev=2067.62 00:29:36.587 clat (msec): min=13, max=118, avg=48.56, stdev=16.95 00:29:36.587 lat (msec): min=13, max=119, avg=49.32, stdev=17.29 00:29:36.587 clat percentiles (msec): 00:29:36.587 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 35], 00:29:36.587 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 39], 60.00th=[ 49], 00:29:36.587 | 70.00th=[ 64], 80.00th=[ 66], 90.00th=[ 69], 95.00th=[ 79], 00:29:36.587 | 99.00th=[ 105], 99.50th=[ 106], 99.90th=[ 111], 99.95th=[ 113], 00:29:36.587 | 99.99th=[ 118] 00:29:36.587 bw ( KiB/s): min=158208, max=457216, per=9.95%, avg=332185.60, stdev=108385.45, samples=20 00:29:36.587 iops : min= 618, max= 1786, avg=1297.60, stdev=423.38, samples=20 00:29:36.587 lat (msec) : 20=0.13%, 50=63.49%, 100=34.79%, 250=1.59% 00:29:36.587 cpu : usr=0.40%, sys=6.01%, ctx=2478, majf=0, minf=4097 00:29:36.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:29:36.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:36.587 issued rwts: total=13039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.587 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:36.587 job1: (groupid=0, jobs=1): err= 0: pid=1766740: Wed Jul 24 07:18:49 2024 00:29:36.587 read: IOPS=845, BW=211MiB/s (222MB/s)(2126MiB/10057msec) 00:29:36.587 slat (usec): min=13, max=34649, avg=1171.82, stdev=3530.82 00:29:36.587 clat (msec): min=10, max=137, avg=74.45, stdev=21.37 00:29:36.587 lat (msec): min=10, max=139, avg=75.62, stdev=21.93 00:29:36.587 clat percentiles (msec): 00:29:36.587 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 52], 20.00th=[ 63], 00:29:36.587 | 30.00th=[ 64], 40.00th=[ 65], 50.00th=[ 66], 60.00th=[ 80], 00:29:36.587 | 70.00th=[ 86], 80.00th=[ 103], 90.00th=[ 105], 95.00th=[ 107], 00:29:36.587 | 99.00th=[ 112], 99.50th=[ 117], 99.90th=[ 138], 99.95th=[ 138], 00:29:36.587 | 99.99th=[ 138] 00:29:36.587 bw ( KiB/s): min=148992, max=404992, per=6.47%, avg=216064.00, stdev=63405.15, samples=20 00:29:36.587 iops : min= 582, max= 1582, avg=844.00, stdev=247.68, samples=20 00:29:36.587 lat (msec) : 20=0.64%, 50=7.73%, 100=67.27%, 250=24.37% 00:29:36.587 cpu : usr=0.47%, sys=3.89%, ctx=1645, majf=0, minf=4097 00:29:36.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:29:36.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:36.588 issued rwts: total=8503,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.588 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:36.588 job2: (groupid=0, jobs=1): err= 0: pid=1766741: Wed Jul 24 07:18:49 2024 00:29:36.588 read: IOPS=890, BW=223MiB/s (234MB/s)(2240MiB/10059msec) 00:29:36.588 slat (usec): min=11, max=46639, avg=1096.08, stdev=3720.63 00:29:36.588 clat (msec): min=3, max=147, avg=70.67, stdev=27.14 00:29:36.588 lat (msec): min=3, max=153, avg=71.76, stdev=27.78 00:29:36.588 clat percentiles (msec): 00:29:36.588 | 1.00th=[ 14], 5.00th=[ 17], 10.00th=[ 18], 20.00th=[ 54], 00:29:36.588 | 30.00th=[ 65], 40.00th=[ 66], 50.00th=[ 68], 
60.00th=[ 80], 00:29:36.588 | 70.00th=[ 85], 80.00th=[ 102], 90.00th=[ 104], 95.00th=[ 107], 00:29:36.588 | 99.00th=[ 111], 99.50th=[ 114], 99.90th=[ 140], 99.95th=[ 144], 00:29:36.588 | 99.99th=[ 148] 00:29:36.588 bw ( KiB/s): min=150016, max=703488, per=6.83%, avg=227788.80, stdev=119760.19, samples=20 00:29:36.588 iops : min= 586, max= 2748, avg=889.80, stdev=467.81, samples=20 00:29:36.588 lat (msec) : 4=0.11%, 10=0.54%, 20=12.59%, 50=1.35%, 100=62.72% 00:29:36.588 lat (msec) : 250=22.70% 00:29:36.588 cpu : usr=0.30%, sys=3.52%, ctx=1821, majf=0, minf=4097 00:29:36.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:29:36.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:36.588 issued rwts: total=8961,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.588 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:36.588 job3: (groupid=0, jobs=1): err= 0: pid=1766742: Wed Jul 24 07:18:49 2024 00:29:36.588 read: IOPS=1486, BW=372MiB/s (390MB/s)(3731MiB/10041msec) 00:29:36.588 slat (usec): min=10, max=22345, avg=649.95, stdev=1759.29 00:29:36.588 clat (usec): min=789, max=114776, avg=42369.09, stdev=12430.62 00:29:36.588 lat (usec): min=832, max=118831, avg=43019.04, stdev=12679.34 00:29:36.588 clat percentiles (msec): 00:29:36.588 | 1.00th=[ 13], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:29:36.588 | 30.00th=[ 35], 40.00th=[ 36], 50.00th=[ 38], 60.00th=[ 47], 00:29:36.588 | 70.00th=[ 48], 80.00th=[ 50], 90.00th=[ 52], 95.00th=[ 56], 00:29:36.588 | 99.00th=[ 103], 99.50th=[ 106], 99.90th=[ 110], 99.95th=[ 112], 00:29:36.588 | 99.99th=[ 112] 00:29:36.588 bw ( KiB/s): min=200704, max=473088, per=11.40%, avg=380441.60, stdev=85447.50, samples=20 00:29:36.588 iops : min= 784, max= 1848, avg=1486.10, stdev=333.78, samples=20 00:29:36.588 lat (usec) : 1000=0.01% 00:29:36.588 lat (msec) : 2=0.05%, 4=0.11%, 10=0.55%, 20=0.86%, 50=84.07% 00:29:36.588 lat (msec) : 100=13.16%, 250=1.19% 00:29:36.588 cpu : usr=0.34%, sys=4.57%, ctx=3161, majf=0, minf=4097 00:29:36.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:29:36.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:36.588 issued rwts: total=14924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.588 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:36.588 job4: (groupid=0, jobs=1): err= 0: pid=1766743: Wed Jul 24 07:18:49 2024 00:29:36.588 read: IOPS=2293, BW=573MiB/s (601MB/s)(5759MiB/10042msec) 00:29:36.588 slat (usec): min=10, max=24162, avg=428.09, stdev=1316.75 00:29:36.588 clat (usec): min=887, max=118020, avg=27444.29, stdev=15865.46 00:29:36.588 lat (usec): min=931, max=119794, avg=27872.38, stdev=16140.53 00:29:36.588 clat percentiles (msec): 00:29:36.588 | 1.00th=[ 10], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 17], 00:29:36.588 | 30.00th=[ 18], 40.00th=[ 18], 50.00th=[ 18], 60.00th=[ 19], 00:29:36.588 | 70.00th=[ 34], 80.00th=[ 47], 90.00th=[ 50], 95.00th=[ 51], 00:29:36.588 | 99.00th=[ 86], 99.50th=[ 105], 99.90th=[ 109], 99.95th=[ 109], 00:29:36.588 | 99.99th=[ 112] 00:29:36.588 bw ( KiB/s): min=319488, max=946688, per=17.62%, avg=588057.60, stdev=278361.04, samples=20 00:29:36.588 iops : min= 1248, max= 3698, avg=2297.10, stdev=1087.35, samples=20 00:29:36.588 lat (usec) : 1000=0.01% 00:29:36.588 lat (msec) : 2=0.13%, 4=0.21%, 
10=0.73%, 20=62.40%, 50=30.75% 00:29:36.588 lat (msec) : 100=4.99%, 250=0.79% 00:29:36.588 cpu : usr=0.45%, sys=6.44%, ctx=4809, majf=0, minf=4097 00:29:36.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:36.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:36.588 issued rwts: total=23034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.588 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:36.588 job5: (groupid=0, jobs=1): err= 0: pid=1766744: Wed Jul 24 07:18:49 2024 00:29:36.588 read: IOPS=786, BW=197MiB/s (206MB/s)(1976MiB/10055msec) 00:29:36.588 slat (usec): min=11, max=32568, avg=1248.41, stdev=3376.09 00:29:36.588 clat (msec): min=13, max=134, avg=80.10, stdev=17.69 00:29:36.588 lat (msec): min=13, max=134, avg=81.35, stdev=18.20 00:29:36.588 clat percentiles (msec): 00:29:36.588 | 1.00th=[ 61], 5.00th=[ 63], 10.00th=[ 64], 20.00th=[ 64], 00:29:36.588 | 30.00th=[ 65], 40.00th=[ 67], 50.00th=[ 74], 60.00th=[ 82], 00:29:36.588 | 70.00th=[ 94], 80.00th=[ 103], 90.00th=[ 105], 95.00th=[ 108], 00:29:36.588 | 99.00th=[ 116], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 136], 00:29:36.588 | 99.99th=[ 136] 00:29:36.588 bw ( KiB/s): min=151552, max=251904, per=6.01%, avg=200704.00, stdev=42747.68, samples=20 00:29:36.588 iops : min= 592, max= 984, avg=784.00, stdev=166.98, samples=20 00:29:36.588 lat (msec) : 20=0.13%, 50=0.23%, 100=72.04%, 250=27.61% 00:29:36.588 cpu : usr=0.27%, sys=2.88%, ctx=1671, majf=0, minf=4097 00:29:36.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:29:36.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:36.588 issued rwts: total=7904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.588 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:36.588 job6: (groupid=0, jobs=1): err= 0: pid=1766745: Wed Jul 24 07:18:49 2024 00:29:36.588 read: IOPS=1295, BW=324MiB/s (340MB/s)(3258MiB/10058msec) 00:29:36.588 slat (usec): min=11, max=25540, avg=763.95, stdev=1977.99 00:29:36.588 clat (msec): min=14, max=129, avg=48.58, stdev=17.12 00:29:36.588 lat (msec): min=14, max=129, avg=49.34, stdev=17.45 00:29:36.588 clat percentiles (msec): 00:29:36.588 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 35], 00:29:36.588 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 39], 60.00th=[ 49], 00:29:36.588 | 70.00th=[ 65], 80.00th=[ 66], 90.00th=[ 70], 95.00th=[ 79], 00:29:36.588 | 99.00th=[ 105], 99.50th=[ 107], 99.90th=[ 112], 99.95th=[ 117], 00:29:36.588 | 99.99th=[ 122] 00:29:36.588 bw ( KiB/s): min=153907, max=458752, per=9.95%, avg=332021.75, stdev=109571.87, samples=20 00:29:36.588 iops : min= 601, max= 1792, avg=1296.95, stdev=428.03, samples=20 00:29:36.588 lat (msec) : 20=0.08%, 50=63.70%, 100=34.62%, 250=1.60% 00:29:36.588 cpu : usr=0.31%, sys=4.01%, ctx=2620, majf=0, minf=4097 00:29:36.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:29:36.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:36.588 issued rwts: total=13032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.588 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:36.588 job7: (groupid=0, jobs=1): err= 0: pid=1766746: Wed Jul 24 07:18:49 2024 
00:29:36.588 read: IOPS=1554, BW=389MiB/s (408MB/s)(3903MiB/10043msec) 00:29:36.588 slat (usec): min=11, max=12373, avg=637.35, stdev=1486.57 00:29:36.588 clat (usec): min=10189, max=90314, avg=40489.64, stdev=7813.73 00:29:36.588 lat (usec): min=10439, max=90342, avg=41126.98, stdev=7999.22 00:29:36.588 clat percentiles (usec): 00:29:36.588 | 1.00th=[30278], 5.00th=[32375], 10.00th=[32900], 20.00th=[33424], 00:29:36.588 | 30.00th=[34341], 40.00th=[34866], 50.00th=[36439], 60.00th=[45876], 00:29:36.588 | 70.00th=[47449], 80.00th=[48497], 90.00th=[50070], 95.00th=[52167], 00:29:36.588 | 99.00th=[56361], 99.50th=[58459], 99.90th=[80217], 99.95th=[85459], 00:29:36.588 | 99.99th=[90702] 00:29:36.588 bw ( KiB/s): min=321536, max=472064, per=11.93%, avg=398054.40, stdev=65739.36, samples=20 00:29:36.588 iops : min= 1256, max= 1844, avg=1554.90, stdev=256.79, samples=20 00:29:36.588 lat (msec) : 20=0.36%, 50=89.19%, 100=10.45% 00:29:36.588 cpu : usr=0.41%, sys=5.27%, ctx=3025, majf=0, minf=4097 00:29:36.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:29:36.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:36.588 issued rwts: total=15612,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.588 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:36.588 job8: (groupid=0, jobs=1): err= 0: pid=1766750: Wed Jul 24 07:18:49 2024 00:29:36.588 read: IOPS=844, BW=211MiB/s (221MB/s)(2123MiB/10057msec) 00:29:36.588 slat (usec): min=17, max=41594, avg=1173.35, stdev=3687.26 00:29:36.588 clat (msec): min=10, max=145, avg=74.54, stdev=21.42 00:29:36.588 lat (msec): min=10, max=147, avg=75.72, stdev=22.01 00:29:36.588 clat percentiles (msec): 00:29:36.588 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 52], 20.00th=[ 63], 00:29:36.588 | 30.00th=[ 64], 40.00th=[ 65], 50.00th=[ 66], 60.00th=[ 80], 00:29:36.588 | 70.00th=[ 86], 80.00th=[ 103], 90.00th=[ 105], 95.00th=[ 107], 00:29:36.588 | 99.00th=[ 113], 99.50th=[ 116], 99.90th=[ 140], 99.95th=[ 142], 00:29:36.588 | 99.99th=[ 146] 00:29:36.588 bw ( KiB/s): min=150016, max=404480, per=6.47%, avg=215782.40, stdev=63123.05, samples=20 00:29:36.588 iops : min= 586, max= 1580, avg=842.90, stdev=246.57, samples=20 00:29:36.588 lat (msec) : 20=0.64%, 50=7.56%, 100=67.33%, 250=24.47% 00:29:36.588 cpu : usr=0.37%, sys=4.20%, ctx=1637, majf=0, minf=4097 00:29:36.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:29:36.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:36.588 issued rwts: total=8492,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.589 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:36.589 job9: (groupid=0, jobs=1): err= 0: pid=1766751: Wed Jul 24 07:18:49 2024 00:29:36.589 read: IOPS=926, BW=232MiB/s (243MB/s)(2330MiB/10057msec) 00:29:36.589 slat (usec): min=11, max=63684, avg=1051.24, stdev=4636.24 00:29:36.589 clat (msec): min=10, max=169, avg=67.95, stdev=25.17 00:29:36.589 lat (msec): min=10, max=169, avg=69.00, stdev=25.92 00:29:36.589 clat percentiles (msec): 00:29:36.589 | 1.00th=[ 21], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 37], 00:29:36.589 | 30.00th=[ 63], 40.00th=[ 64], 50.00th=[ 65], 60.00th=[ 67], 00:29:36.589 | 70.00th=[ 80], 80.00th=[ 102], 90.00th=[ 104], 95.00th=[ 106], 00:29:36.589 | 99.00th=[ 110], 99.50th=[ 113], 99.90th=[ 161], 
99.95th=[ 165], 00:29:36.589 | 99.99th=[ 169] 00:29:36.589 bw ( KiB/s): min=146944, max=427520, per=7.10%, avg=236944.50, stdev=82200.89, samples=20 00:29:36.589 iops : min= 574, max= 1670, avg=925.55, stdev=321.06, samples=20 00:29:36.589 lat (msec) : 20=0.96%, 50=21.94%, 100=55.86%, 250=21.25% 00:29:36.589 cpu : usr=0.23%, sys=2.98%, ctx=2021, majf=0, minf=4097 00:29:36.589 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:29:36.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:36.589 issued rwts: total=9318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.589 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:36.589 job10: (groupid=0, jobs=1): err= 0: pid=1766754: Wed Jul 24 07:18:49 2024 00:29:36.589 read: IOPS=826, BW=207MiB/s (217MB/s)(2079MiB/10059msec) 00:29:36.589 slat (usec): min=11, max=38940, avg=1172.46, stdev=3449.81 00:29:36.589 clat (msec): min=11, max=138, avg=76.15, stdev=20.53 00:29:36.589 lat (msec): min=11, max=144, avg=77.33, stdev=21.07 00:29:36.589 clat percentiles (msec): 00:29:36.589 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 64], 00:29:36.589 | 30.00th=[ 66], 40.00th=[ 66], 50.00th=[ 68], 60.00th=[ 77], 00:29:36.589 | 70.00th=[ 88], 80.00th=[ 103], 90.00th=[ 105], 95.00th=[ 107], 00:29:36.589 | 99.00th=[ 114], 99.50th=[ 121], 99.90th=[ 136], 99.95th=[ 138], 00:29:36.589 | 99.99th=[ 140] 00:29:36.589 bw ( KiB/s): min=148480, max=325120, per=6.33%, avg=211302.40, stdev=52958.85, samples=20 00:29:36.589 iops : min= 580, max= 1270, avg=825.40, stdev=206.87, samples=20 00:29:36.589 lat (msec) : 20=0.22%, 50=12.11%, 100=60.84%, 250=26.84% 00:29:36.589 cpu : usr=0.31%, sys=3.54%, ctx=1774, majf=0, minf=3221 00:29:36.589 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:29:36.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:36.589 issued rwts: total=8317,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.589 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:36.589 00:29:36.589 Run status group 0 (all jobs): 00:29:36.589 READ: bw=3259MiB/s (3417MB/s), 197MiB/s-573MiB/s (206MB/s-601MB/s), io=32.0GiB (34.4GB), run=10041-10059msec 00:29:36.589 00:29:36.589 Disk stats (read/write): 00:29:36.589 nvme0n1: ios=25731/0, merge=0/0, ticks=1220709/0, in_queue=1220709, util=96.77% 00:29:36.589 nvme10n1: ios=16691/0, merge=0/0, ticks=1225362/0, in_queue=1225362, util=97.01% 00:29:36.589 nvme1n1: ios=17568/0, merge=0/0, ticks=1222323/0, in_queue=1222323, util=97.35% 00:29:36.589 nvme2n1: ios=29436/0, merge=0/0, ticks=1219488/0, in_queue=1219488, util=97.52% 00:29:36.589 nvme3n1: ios=45660/0, merge=0/0, ticks=1217831/0, in_queue=1217831, util=97.63% 00:29:36.589 nvme4n1: ios=15429/0, merge=0/0, ticks=1221432/0, in_queue=1221432, util=98.04% 00:29:36.589 nvme5n1: ios=25722/0, merge=0/0, ticks=1216820/0, in_queue=1216820, util=98.23% 00:29:36.589 nvme6n1: ios=30806/0, merge=0/0, ticks=1218499/0, in_queue=1218499, util=98.40% 00:29:36.589 nvme7n1: ios=16669/0, merge=0/0, ticks=1224113/0, in_queue=1224113, util=98.89% 00:29:36.589 nvme8n1: ios=18272/0, merge=0/0, ticks=1218484/0, in_queue=1218484, util=99.13% 00:29:36.589 nvme9n1: ios=16306/0, merge=0/0, ticks=1223204/0, in_queue=1223204, util=99.31% 00:29:36.589 07:18:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:29:36.589 [global] 00:29:36.589 thread=1 00:29:36.589 invalidate=1 00:29:36.589 rw=randwrite 00:29:36.589 time_based=1 00:29:36.589 runtime=10 00:29:36.589 ioengine=libaio 00:29:36.589 direct=1 00:29:36.589 bs=262144 00:29:36.589 iodepth=64 00:29:36.589 norandommap=1 00:29:36.589 numjobs=1 00:29:36.589 00:29:36.589 [job0] 00:29:36.589 filename=/dev/nvme0n1 00:29:36.589 [job1] 00:29:36.589 filename=/dev/nvme10n1 00:29:36.589 [job2] 00:29:36.589 filename=/dev/nvme1n1 00:29:36.589 [job3] 00:29:36.589 filename=/dev/nvme2n1 00:29:36.589 [job4] 00:29:36.589 filename=/dev/nvme3n1 00:29:36.589 [job5] 00:29:36.589 filename=/dev/nvme4n1 00:29:36.589 [job6] 00:29:36.589 filename=/dev/nvme5n1 00:29:36.589 [job7] 00:29:36.589 filename=/dev/nvme6n1 00:29:36.589 [job8] 00:29:36.589 filename=/dev/nvme7n1 00:29:36.589 [job9] 00:29:36.589 filename=/dev/nvme8n1 00:29:36.589 [job10] 00:29:36.589 filename=/dev/nvme9n1 00:29:36.589 Could not set queue depth (nvme0n1) 00:29:36.589 Could not set queue depth (nvme10n1) 00:29:36.589 Could not set queue depth (nvme1n1) 00:29:36.589 Could not set queue depth (nvme2n1) 00:29:36.589 Could not set queue depth (nvme3n1) 00:29:36.589 Could not set queue depth (nvme4n1) 00:29:36.589 Could not set queue depth (nvme5n1) 00:29:36.589 Could not set queue depth (nvme6n1) 00:29:36.589 Could not set queue depth (nvme7n1) 00:29:36.589 Could not set queue depth (nvme8n1) 00:29:36.589 Could not set queue depth (nvme9n1) 00:29:36.589 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:36.589 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:36.589 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:36.589 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:36.589 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:36.589 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:36.589 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:36.589 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:36.589 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:36.589 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:36.589 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:36.589 fio-3.35 00:29:36.589 Starting 11 threads 00:29:46.572 00:29:46.572 job0: (groupid=0, jobs=1): err= 0: pid=1768478: Wed Jul 24 07:19:00 2024 00:29:46.572 write: IOPS=796, BW=199MiB/s (209MB/s)(2003MiB/10059msec); 0 zone resets 00:29:46.572 slat (usec): min=29, max=15900, avg=1242.39, stdev=2127.18 00:29:46.572 clat (msec): min=12, max=135, avg=79.07, stdev=10.32 00:29:46.572 lat (msec): min=12, max=135, avg=80.31, stdev=10.31 00:29:46.572 clat percentiles (msec): 00:29:46.572 | 1.00th=[ 55], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 
77], 00:29:46.572 | 30.00th=[ 79], 40.00th=[ 81], 50.00th=[ 81], 60.00th=[ 82], 00:29:46.572 | 70.00th=[ 82], 80.00th=[ 83], 90.00th=[ 84], 95.00th=[ 94], 00:29:46.572 | 99.00th=[ 116], 99.50th=[ 121], 99.90th=[ 128], 99.95th=[ 131], 00:29:46.572 | 99.99th=[ 136] 00:29:46.572 bw ( KiB/s): min=164864, max=265234, per=6.23%, avg=203486.10, stdev=21480.01, samples=20 00:29:46.572 iops : min= 644, max= 1036, avg=794.80, stdev=83.90, samples=20 00:29:46.572 lat (msec) : 20=0.15%, 50=0.40%, 100=96.08%, 250=3.37% 00:29:46.572 cpu : usr=2.11%, sys=3.59%, ctx=2023, majf=0, minf=146 00:29:46.572 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:29:46.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:46.572 issued rwts: total=0,8013,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.572 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:46.572 job1: (groupid=0, jobs=1): err= 0: pid=1768490: Wed Jul 24 07:19:00 2024 00:29:46.572 write: IOPS=803, BW=201MiB/s (211MB/s)(2023MiB/10064msec); 0 zone resets 00:29:46.572 slat (usec): min=18, max=9782, avg=1230.38, stdev=2092.33 00:29:46.572 clat (msec): min=3, max=141, avg=78.36, stdev= 7.71 00:29:46.572 lat (msec): min=3, max=141, avg=79.59, stdev= 7.57 00:29:46.572 clat percentiles (msec): 00:29:46.572 | 1.00th=[ 57], 5.00th=[ 64], 10.00th=[ 74], 20.00th=[ 77], 00:29:46.572 | 30.00th=[ 79], 40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 81], 00:29:46.572 | 70.00th=[ 81], 80.00th=[ 82], 90.00th=[ 83], 95.00th=[ 85], 00:29:46.572 | 99.00th=[ 94], 99.50th=[ 100], 99.90th=[ 132], 99.95th=[ 136], 00:29:46.572 | 99.99th=[ 142] 00:29:46.572 bw ( KiB/s): min=198656, max=243712, per=6.29%, avg=205430.05, stdev=9273.42, samples=20 00:29:46.572 iops : min= 776, max= 952, avg=802.40, stdev=36.24, samples=20 00:29:46.572 lat (msec) : 4=0.05%, 10=0.22%, 20=0.14%, 50=0.35%, 100=98.83% 00:29:46.572 lat (msec) : 250=0.42% 00:29:46.572 cpu : usr=2.08%, sys=3.67%, ctx=2048, majf=0, minf=140 00:29:46.572 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:29:46.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:46.572 issued rwts: total=0,8090,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.572 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:46.572 job2: (groupid=0, jobs=1): err= 0: pid=1768491: Wed Jul 24 07:19:00 2024 00:29:46.572 write: IOPS=800, BW=200MiB/s (210MB/s)(2008MiB/10033msec); 0 zone resets 00:29:46.572 slat (usec): min=23, max=16860, avg=1220.21, stdev=2213.48 00:29:46.572 clat (msec): min=12, max=127, avg=78.72, stdev=11.66 00:29:46.572 lat (msec): min=12, max=131, avg=79.94, stdev=11.72 00:29:46.572 clat percentiles (msec): 00:29:46.572 | 1.00th=[ 34], 5.00th=[ 64], 10.00th=[ 75], 20.00th=[ 77], 00:29:46.572 | 30.00th=[ 80], 40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 81], 00:29:46.572 | 70.00th=[ 82], 80.00th=[ 82], 90.00th=[ 84], 95.00th=[ 92], 00:29:46.572 | 99.00th=[ 116], 99.50th=[ 118], 99.90th=[ 126], 99.95th=[ 127], 00:29:46.572 | 99.99th=[ 128] 00:29:46.572 bw ( KiB/s): min=168960, max=264704, per=6.24%, avg=203913.05, stdev=16578.54, samples=20 00:29:46.572 iops : min= 660, max= 1034, avg=796.45, stdev=64.78, samples=20 00:29:46.572 lat (msec) : 20=0.14%, 50=4.42%, 100=92.71%, 250=2.73% 00:29:46.572 cpu : usr=1.91%, sys=3.64%, ctx=2082, majf=0, minf=12 
00:29:46.572 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:29:46.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:46.572 issued rwts: total=0,8030,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.572 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:46.572 job3: (groupid=0, jobs=1): err= 0: pid=1768492: Wed Jul 24 07:19:00 2024 00:29:46.572 write: IOPS=801, BW=200MiB/s (210MB/s)(2016MiB/10059msec); 0 zone resets 00:29:46.572 slat (usec): min=28, max=9503, avg=1234.54, stdev=2097.66 00:29:46.572 clat (msec): min=12, max=135, avg=78.58, stdev= 6.52 00:29:46.572 lat (msec): min=12, max=135, avg=79.81, stdev= 6.31 00:29:46.572 clat percentiles (msec): 00:29:46.572 | 1.00th=[ 58], 5.00th=[ 66], 10.00th=[ 74], 20.00th=[ 77], 00:29:46.572 | 30.00th=[ 79], 40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 81], 00:29:46.572 | 70.00th=[ 81], 80.00th=[ 82], 90.00th=[ 83], 95.00th=[ 85], 00:29:46.572 | 99.00th=[ 93], 99.50th=[ 99], 99.90th=[ 127], 99.95th=[ 131], 00:29:46.572 | 99.99th=[ 136] 00:29:46.572 bw ( KiB/s): min=189306, max=246784, per=6.27%, avg=204757.85, stdev=10527.04, samples=20 00:29:46.572 iops : min= 739, max= 964, avg=799.75, stdev=41.17, samples=20 00:29:46.572 lat (msec) : 20=0.10%, 50=0.31%, 100=99.22%, 250=0.37% 00:29:46.572 cpu : usr=2.04%, sys=3.91%, ctx=2042, majf=0, minf=273 00:29:46.572 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:29:46.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:46.572 issued rwts: total=0,8063,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.572 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:46.572 job4: (groupid=0, jobs=1): err= 0: pid=1768493: Wed Jul 24 07:19:00 2024 00:29:46.572 write: IOPS=3182, BW=796MiB/s (834MB/s)(7983MiB/10034msec); 0 zone resets 00:29:46.572 slat (usec): min=15, max=55540, avg=307.30, stdev=664.92 00:29:46.572 clat (usec): min=1079, max=128747, avg=19797.46, stdev=5344.28 00:29:46.572 lat (usec): min=1135, max=141900, avg=20104.76, stdev=5408.87 00:29:46.572 clat percentiles (msec): 00:29:46.572 | 1.00th=[ 14], 5.00th=[ 18], 10.00th=[ 19], 20.00th=[ 19], 00:29:46.572 | 30.00th=[ 19], 40.00th=[ 20], 50.00th=[ 20], 60.00th=[ 20], 00:29:46.572 | 70.00th=[ 20], 80.00th=[ 21], 90.00th=[ 21], 95.00th=[ 21], 00:29:46.572 | 99.00th=[ 40], 99.50th=[ 43], 99.90th=[ 99], 99.95th=[ 110], 00:29:46.572 | 99.99th=[ 127] 00:29:46.572 bw ( KiB/s): min=551936, max=854016, per=24.97%, avg=815542.95, stdev=75767.31, samples=20 00:29:46.572 iops : min= 2156, max= 3336, avg=3185.65, stdev=295.94, samples=20 00:29:46.572 lat (msec) : 2=0.03%, 4=0.10%, 10=0.41%, 20=80.25%, 50=18.81% 00:29:46.572 lat (msec) : 100=0.33%, 250=0.07% 00:29:46.572 cpu : usr=4.19%, sys=6.79%, ctx=6927, majf=0, minf=206 00:29:46.572 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:46.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:46.572 issued rwts: total=0,31930,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.572 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:46.572 job5: (groupid=0, jobs=1): err= 0: pid=1768494: Wed Jul 24 07:19:00 2024 00:29:46.572 write: IOPS=1686, BW=422MiB/s 
(442MB/s)(4224MiB/10015msec); 0 zone resets 00:29:46.572 slat (usec): min=21, max=10443, avg=588.22, stdev=1093.58 00:29:46.572 clat (usec): min=7093, max=67634, avg=37336.73, stdev=8037.57 00:29:46.572 lat (usec): min=7122, max=67702, avg=37924.94, stdev=8116.67 00:29:46.572 clat percentiles (usec): 00:29:46.572 | 1.00th=[18482], 5.00th=[19530], 10.00th=[20317], 20.00th=[36963], 00:29:46.572 | 30.00th=[38011], 40.00th=[39060], 50.00th=[39060], 60.00th=[39584], 00:29:46.572 | 70.00th=[40109], 80.00th=[40109], 90.00th=[41157], 95.00th=[43254], 00:29:46.572 | 99.00th=[58983], 99.50th=[60031], 99.90th=[63701], 99.95th=[65274], 00:29:46.572 | 99.99th=[65799] 00:29:46.572 bw ( KiB/s): min=315904, max=596992, per=12.61%, avg=411769.00, stdev=52554.54, samples=19 00:29:46.572 iops : min= 1234, max= 2332, avg=1608.32, stdev=205.34, samples=19 00:29:46.572 lat (msec) : 10=0.03%, 20=8.55%, 50=87.25%, 100=4.17% 00:29:46.572 cpu : usr=3.37%, sys=5.73%, ctx=4075, majf=0, minf=12 00:29:46.572 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:29:46.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:46.573 issued rwts: total=0,16895,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.573 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:46.573 job6: (groupid=0, jobs=1): err= 0: pid=1768495: Wed Jul 24 07:19:00 2024 00:29:46.573 write: IOPS=1501, BW=375MiB/s (394MB/s)(3777MiB/10059msec); 0 zone resets 00:29:46.573 slat (usec): min=22, max=65042, avg=646.50, stdev=1328.05 00:29:46.573 clat (msec): min=4, max=179, avg=41.95, stdev=11.26 00:29:46.573 lat (msec): min=4, max=179, avg=42.59, stdev=11.36 00:29:46.573 clat percentiles (msec): 00:29:46.573 | 1.00th=[ 36], 5.00th=[ 37], 10.00th=[ 38], 20.00th=[ 39], 00:29:46.573 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 40], 60.00th=[ 40], 00:29:46.573 | 70.00th=[ 41], 80.00th=[ 41], 90.00th=[ 43], 95.00th=[ 58], 00:29:46.573 | 99.00th=[ 114], 99.50th=[ 121], 99.90th=[ 136], 99.95th=[ 138], 00:29:46.573 | 99.99th=[ 180] 00:29:46.573 bw ( KiB/s): min=200704, max=416256, per=11.79%, avg=385062.40, stdev=58501.25, samples=20 00:29:46.573 iops : min= 784, max= 1626, avg=1504.00, stdev=228.49, samples=20 00:29:46.573 lat (msec) : 10=0.04%, 20=0.11%, 50=91.57%, 100=7.07%, 250=1.21% 00:29:46.573 cpu : usr=3.43%, sys=5.08%, ctx=3828, majf=0, minf=11 00:29:46.573 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:29:46.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:46.573 issued rwts: total=0,15108,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.573 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:46.573 job7: (groupid=0, jobs=1): err= 0: pid=1768496: Wed Jul 24 07:19:00 2024 00:29:46.573 write: IOPS=800, BW=200MiB/s (210MB/s)(2014MiB/10067msec); 0 zone resets 00:29:46.573 slat (usec): min=22, max=15245, avg=1236.52, stdev=2174.50 00:29:46.573 clat (msec): min=3, max=142, avg=78.73, stdev=11.71 00:29:46.573 lat (msec): min=3, max=142, avg=79.97, stdev=11.73 00:29:46.573 clat percentiles (msec): 00:29:46.573 | 1.00th=[ 43], 5.00th=[ 59], 10.00th=[ 63], 20.00th=[ 77], 00:29:46.573 | 30.00th=[ 79], 40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 82], 00:29:46.573 | 70.00th=[ 82], 80.00th=[ 83], 90.00th=[ 84], 95.00th=[ 94], 00:29:46.573 | 99.00th=[ 118], 99.50th=[ 123], 99.90th=[ 
132], 99.95th=[ 140], 00:29:46.573 | 99.99th=[ 142] 00:29:46.573 bw ( KiB/s): min=163840, max=285696, per=6.26%, avg=204574.90, stdev=24898.24, samples=20 00:29:46.573 iops : min= 640, max= 1116, avg=799.10, stdev=97.26, samples=20 00:29:46.573 lat (msec) : 4=0.06%, 10=0.30%, 20=0.26%, 50=0.51%, 100=95.22% 00:29:46.573 lat (msec) : 250=3.65% 00:29:46.573 cpu : usr=1.81%, sys=3.82%, ctx=2022, majf=0, minf=76 00:29:46.573 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:29:46.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:46.573 issued rwts: total=0,8055,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.573 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:46.573 job8: (groupid=0, jobs=1): err= 0: pid=1768500: Wed Jul 24 07:19:00 2024 00:29:46.573 write: IOPS=797, BW=199MiB/s (209MB/s)(2005MiB/10059msec); 0 zone resets 00:29:46.573 slat (usec): min=27, max=14854, avg=1232.34, stdev=2156.33 00:29:46.573 clat (msec): min=12, max=138, avg=79.00, stdev=10.57 00:29:46.573 lat (msec): min=12, max=138, avg=80.23, stdev=10.59 00:29:46.573 clat percentiles (msec): 00:29:46.573 | 1.00th=[ 54], 5.00th=[ 60], 10.00th=[ 63], 20.00th=[ 77], 00:29:46.573 | 30.00th=[ 79], 40.00th=[ 81], 50.00th=[ 81], 60.00th=[ 82], 00:29:46.573 | 70.00th=[ 82], 80.00th=[ 83], 90.00th=[ 84], 95.00th=[ 94], 00:29:46.573 | 99.00th=[ 117], 99.50th=[ 122], 99.90th=[ 129], 99.95th=[ 136], 00:29:46.573 | 99.99th=[ 140] 00:29:46.573 bw ( KiB/s): min=163328, max=265234, per=6.24%, avg=203690.95, stdev=21764.42, samples=20 00:29:46.573 iops : min= 638, max= 1036, avg=795.60, stdev=85.02, samples=20 00:29:46.573 lat (msec) : 20=0.15%, 50=0.67%, 100=95.77%, 250=3.40% 00:29:46.573 cpu : usr=2.08%, sys=3.56%, ctx=2052, majf=0, minf=145 00:29:46.573 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:29:46.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:46.573 issued rwts: total=0,8021,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.573 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:46.573 job9: (groupid=0, jobs=1): err= 0: pid=1768502: Wed Jul 24 07:19:00 2024 00:29:46.573 write: IOPS=801, BW=200MiB/s (210MB/s)(2016MiB/10059msec); 0 zone resets 00:29:46.573 slat (usec): min=27, max=13794, avg=1234.54, stdev=2126.65 00:29:46.573 clat (msec): min=17, max=135, avg=78.58, stdev= 6.38 00:29:46.573 lat (msec): min=18, max=135, avg=79.81, stdev= 6.16 00:29:46.573 clat percentiles (msec): 00:29:46.573 | 1.00th=[ 58], 5.00th=[ 66], 10.00th=[ 74], 20.00th=[ 77], 00:29:46.573 | 30.00th=[ 79], 40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 81], 00:29:46.573 | 70.00th=[ 81], 80.00th=[ 82], 90.00th=[ 83], 95.00th=[ 85], 00:29:46.573 | 99.00th=[ 93], 99.50th=[ 96], 99.90th=[ 127], 99.95th=[ 131], 00:29:46.573 | 99.99th=[ 136] 00:29:46.573 bw ( KiB/s): min=187254, max=244224, per=6.27%, avg=204757.35, stdev=10244.75, samples=20 00:29:46.573 iops : min= 731, max= 954, avg=799.75, stdev=40.06, samples=20 00:29:46.573 lat (msec) : 20=0.07%, 50=0.30%, 100=99.22%, 250=0.41% 00:29:46.573 cpu : usr=1.93%, sys=3.70%, ctx=2041, majf=0, minf=138 00:29:46.573 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:29:46.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.573 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:46.573 issued rwts: total=0,8063,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.573 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:46.573 job10: (groupid=0, jobs=1): err= 0: pid=1768503: Wed Jul 24 07:19:00 2024 00:29:46.573 write: IOPS=811, BW=203MiB/s (213MB/s)(2042MiB/10058msec); 0 zone resets 00:29:46.573 slat (usec): min=21, max=15262, avg=1206.44, stdev=2115.13 00:29:46.573 clat (msec): min=8, max=136, avg=77.59, stdev=13.06 00:29:46.573 lat (msec): min=8, max=136, avg=78.80, stdev=13.15 00:29:46.573 clat percentiles (msec): 00:29:46.573 | 1.00th=[ 37], 5.00th=[ 54], 10.00th=[ 58], 20.00th=[ 77], 00:29:46.573 | 30.00th=[ 79], 40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 82], 00:29:46.573 | 70.00th=[ 82], 80.00th=[ 83], 90.00th=[ 84], 95.00th=[ 93], 00:29:46.573 | 99.00th=[ 117], 99.50th=[ 120], 99.90th=[ 128], 99.95th=[ 133], 00:29:46.573 | 99.99th=[ 138] 00:29:46.573 bw ( KiB/s): min=165376, max=333979, per=6.35%, avg=207409.80, stdev=34491.57, samples=20 00:29:46.573 iops : min= 646, max= 1304, avg=810.10, stdev=134.63, samples=20 00:29:46.573 lat (msec) : 10=0.05%, 20=0.15%, 50=4.32%, 100=92.21%, 250=3.27% 00:29:46.573 cpu : usr=1.86%, sys=3.70%, ctx=2085, majf=0, minf=11 00:29:46.573 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:29:46.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:46.573 issued rwts: total=0,8166,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.573 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:46.573 00:29:46.573 Run status group 0 (all jobs): 00:29:46.573 WRITE: bw=3189MiB/s (3344MB/s), 199MiB/s-796MiB/s (209MB/s-834MB/s), io=31.4GiB (33.7GB), run=10015-10067msec 00:29:46.573 00:29:46.573 Disk stats (read/write): 00:29:46.573 nvme0n1: ios=49/15694, merge=0/0, ticks=11/1214379, in_queue=1214390, util=96.58% 00:29:46.573 nvme10n1: ios=0/15858, merge=0/0, ticks=0/1221061, in_queue=1221061, util=96.84% 00:29:46.573 nvme1n1: ios=0/15508, merge=0/0, ticks=0/1212685, in_queue=1212685, util=97.11% 00:29:46.573 nvme2n1: ios=0/15794, merge=0/0, ticks=0/1217345, in_queue=1217345, util=97.28% 00:29:46.573 nvme3n1: ios=0/63308, merge=0/0, ticks=0/1216663, in_queue=1216663, util=97.38% 00:29:46.573 nvme4n1: ios=0/32821, merge=0/0, ticks=0/1221193, in_queue=1221193, util=97.76% 00:29:46.573 nvme5n1: ios=0/29880, merge=0/0, ticks=0/1219215, in_queue=1219215, util=97.94% 00:29:46.573 nvme6n1: ios=0/15794, merge=0/0, ticks=0/1212246, in_queue=1212246, util=98.19% 00:29:46.573 nvme7n1: ios=0/15713, merge=0/0, ticks=0/1216055, in_queue=1216055, util=98.58% 00:29:46.573 nvme8n1: ios=0/15792, merge=0/0, ticks=0/1212994, in_queue=1212994, util=98.82% 00:29:46.573 nvme9n1: ios=0/16002, merge=0/0, ticks=0/1209964, in_queue=1209964, util=98.99% 00:29:46.573 07:19:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:29:46.573 07:19:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:29:46.573 07:19:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:46.573 07:19:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:46.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:46.832 07:19:01 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:29:46.832 07:19:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:29:46.832 07:19:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:29:46.832 07:19:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK1 00:29:46.832 07:19:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:29:46.832 07:19:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK1 00:29:46.832 07:19:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:29:46.832 07:19:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:46.832 07:19:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:46.832 07:19:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:46.832 07:19:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:46.832 07:19:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:46.832 07:19:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:29:47.767 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:29:47.767 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:29:47.767 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:29:47.767 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:29:47.767 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK2 00:29:47.767 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:29:47.767 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK2 00:29:48.025 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:29:48.025 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:48.025 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.025 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:48.025 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.025 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:48.025 07:19:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:29:48.960 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 
controller(s) 00:29:48.960 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:29:48.960 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:29:48.960 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:29:48.960 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK3 00:29:48.960 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:29:48.960 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK3 00:29:48.960 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:29:48.960 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:48.961 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.961 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:48.961 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.961 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:48.961 07:19:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:29:49.896 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:29:49.896 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:29:49.896 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:29:49.896 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:29:49.896 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK4 00:29:49.896 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:29:49.896 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK4 00:29:49.896 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:29:49.896 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:29:49.896 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.896 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:49.896 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:49.896 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:49.896 07:19:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:29:50.833 
NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:29:50.833 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:29:50.833 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:29:50.833 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:29:50.833 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK5 00:29:50.833 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:29:50.833 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK5 00:29:50.833 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:29:50.833 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:29:50.833 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.833 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:50.833 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.833 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:50.833 07:19:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:29:51.807 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:29:51.807 07:19:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:29:51.807 07:19:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:29:51.807 07:19:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:29:51.807 07:19:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK6 00:29:51.807 07:19:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:29:51.807 07:19:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK6 00:29:51.807 07:19:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:29:51.807 07:19:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:29:51.807 07:19:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.807 07:19:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:52.066 07:19:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.066 07:19:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:52.066 07:19:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode7 00:29:53.002 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:29:53.002 07:19:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:29:53.002 07:19:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:29:53.002 07:19:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:29:53.002 07:19:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK7 00:29:53.002 07:19:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:29:53.002 07:19:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK7 00:29:53.002 07:19:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:29:53.002 07:19:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:29:53.002 07:19:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.002 07:19:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:53.002 07:19:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.002 07:19:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:53.002 07:19:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:29:53.939 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:29:53.939 07:19:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:29:53.939 07:19:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:29:53.939 07:19:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:29:53.939 07:19:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK8 00:29:53.939 07:19:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK8 00:29:53.939 07:19:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:29:53.939 07:19:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:29:53.939 07:19:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:29:53.939 07:19:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.939 07:19:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:53.939 07:19:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.939 07:19:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:53.939 07:19:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 
-- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:29:54.876 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:29:54.876 07:19:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:29:54.876 07:19:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:29:54.876 07:19:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:29:54.876 07:19:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK9 00:29:54.876 07:19:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:29:54.876 07:19:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK9 00:29:54.876 07:19:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:29:54.876 07:19:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:29:54.876 07:19:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.876 07:19:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:54.876 07:19:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.876 07:19:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:54.876 07:19:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:29:55.813 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:29:55.813 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:29:55.814 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:29:55.814 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:29:55.814 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK10 00:29:55.814 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:29:55.814 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK10 00:29:55.814 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:29:55.814 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:29:55.814 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.814 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:55.814 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.814 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:55.814 07:19:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:29:56.750 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:29:56.750 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:29:56.750 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:29:56.750 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:29:56.750 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK11 00:29:56.750 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK11 00:29:56.750 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:29:57.009 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:29:57.009 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:29:57.009 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.009 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:57.009 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.009 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:29:57.009 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:29:57.009 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:29:57.009 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:57.009 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:29:57.009 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:29:57.009 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:29:57.009 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:29:57.009 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:57.010 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:29:57.010 rmmod nvme_rdma 00:29:57.010 rmmod nvme_fabrics 00:29:57.010 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:57.010 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:29:57.010 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:29:57.010 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1760024 ']' 00:29:57.010 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1760024 00:29:57.010 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 1760024 ']' 00:29:57.010 
07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 1760024 00:29:57.010 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:29:57.010 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:57.010 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1760024 00:29:57.010 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:57.010 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:57.010 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1760024' 00:29:57.010 killing process with pid 1760024 00:29:57.010 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 1760024 00:29:57.010 07:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 1760024 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:30:01.208 00:30:01.208 real 1m21.032s 00:30:01.208 user 5m7.243s 00:30:01.208 sys 0m19.553s 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:01.208 ************************************ 00:30:01.208 END TEST nvmf_multiconnection 00:30:01.208 ************************************ 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:01.208 ************************************ 00:30:01.208 START TEST nvmf_initiator_timeout 00:30:01.208 ************************************ 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:30:01.208 * Looking for test storage... 
00:30:01.208 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:30:01.208 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:01.209 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:01.209 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:01.209 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:01.209 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:01.209 07:19:15 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:01.209 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:01.209 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:01.209 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:01.209 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:01.209 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:30:01.209 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:30:01.209 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:01.209 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:01.209 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:01.209 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:01.209 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.209 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:01.209 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.209 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:01.209 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:01.209 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:30:01.209 07:19:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:09.326 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:09.326 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:30:09.326 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:09.326 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:09.326 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:09.326 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:09.326 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:09.326 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:30:09.326 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:09.326 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:30:09.326 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:30:09.326 07:19:23 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:30:09.326 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:30:09.326 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:30:09.326 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:30:09.326 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:09.326 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:09.326 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:09.326 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:09.326 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:09.326 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:09.326 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:30:09.327 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:30:09.327 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:30:09.327 Found net devices under 0000:d9:00.0: mlx_0_0 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:30:09.327 Found net devices under 0000:d9:00.1: mlx_0_1 
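The discovery loop above resolves each supported Mellanox PCI function to its kernel netdev by globbing sysfs, which is how 0000:d9:00.0 / 0000:d9:00.1 end up mapped to mlx_0_0 / mlx_0_1. A minimal standalone sketch of that lookup (not taken from the test scripts; the PCI addresses are simply the two ports found in this run):

  # Sketch: mirror the pci_net_devs glob used by gather_supported_nvmf_pci_devs.
  for pci in 0000:d9:00.0 0000:d9:00.1; do
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$netdir" ] || continue          # no netdev bound to this function
          echo "$pci -> ${netdir##*/}"          # prints mlx_0_0 / mlx_0_1 here
      done
  done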
00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # rdma_device_init 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # uname 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # modprobe ib_cm 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@63 -- # modprobe ib_core 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@64 -- # modprobe ib_umad 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@66 -- # modprobe iw_cm 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # allocate_nic_ips 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@73 -- # get_rdma_if_list 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:09.327 
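rdma_device_init above amounts to loading the kernel RDMA stack before the interfaces are assigned their 192.168.100.x addresses. A hedged equivalent of load_ib_rdma_modules, assuming a root shell:

  # Sketch: the module set loaded by load_ib_rdma_modules in nvmf/common.sh.
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$m"
  done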
07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:30:09.327 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:09.327 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:30:09.327 altname enp217s0f0np0 00:30:09.327 altname ens818f0np0 00:30:09.327 inet 192.168.100.8/24 scope global mlx_0_0 00:30:09.327 valid_lft forever preferred_lft forever 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:30:09.327 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:30:09.327 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:30:09.327 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:30:09.328 altname enp217s0f1np1 00:30:09.328 altname ens818f1np1 00:30:09.328 inet 192.168.100.9/24 scope global mlx_0_1 00:30:09.328 valid_lft forever preferred_lft forever 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@86 -- # get_rdma_if_list 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=mlx_0_0 
00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:30:09.328 192.168.100.9' 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:30:09.328 192.168.100.9' 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # head -n 1 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:30:09.328 192.168.100.9' 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # tail -n +2 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # head -n 1 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1776489 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1776489 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 1776489 ']' 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:09.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:09.328 07:19:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:09.328 [2024-07-24 07:19:23.623347] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:30:09.328 [2024-07-24 07:19:23.623438] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:09.328 EAL: No free 2048 kB hugepages reported on node 1 00:30:09.328 [2024-07-24 07:19:23.768486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:09.586 [2024-07-24 07:19:23.977223] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:09.586 [2024-07-24 07:19:23.977264] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:09.586 [2024-07-24 07:19:23.977280] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:09.586 [2024-07-24 07:19:23.977291] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:09.586 [2024-07-24 07:19:23.977303] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:09.586 [2024-07-24 07:19:23.977421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:09.586 [2024-07-24 07:19:23.977495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:09.586 [2024-07-24 07:19:23.977590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:09.586 [2024-07-24 07:19:23.977601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:09.843 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:09.843 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:30:09.843 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:09.843 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:09.843 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:09.843 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:09.843 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:30:09.843 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:09.843 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:09.843 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:10.101 Malloc0 00:30:10.101 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.101 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:30:10.101 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.101 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:10.101 Delay0 00:30:10.101 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.101 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:30:10.101 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.101 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:10.101 [2024-07-24 07:19:24.552841] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x61200002a040/0x7f72077d4940) succeed. 00:30:10.101 [2024-07-24 07:19:24.562518] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x61200002a1c0/0x7f720778e940) succeed. 
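The rpc_cmd calls above and just below provision the target end to end: a 64 MB malloc bdev, a delay bdev wrapping it with all four latency knobs at 30 us, an RDMA transport, and then (in the next few log lines) the cnode1 subsystem, its Delay0 namespace, and a listener on 192.168.100.8:4420. A sketch of the same sequence issued directly with the repo's scripts/rpc.py instead of the rpc_cmd wrapper (the invocation style is an assumption; the arguments are the ones shown in this log):

  # Sketch: provision the NVMe-oF RDMA target the way this test does.
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512 -b Malloc0                             # 64 MB bdev, 512 B blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # latencies in microseconds
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

After that the host side connects with the generated hostnqn (nvme connect -i 15 ... -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420) and waits until lsblk reports a device with the SPDKISFASTANDAWESOME serial.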
00:30:10.359 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.359 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:10.359 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.359 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:10.359 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.359 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:10.359 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.359 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:10.359 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.359 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:10.359 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.359 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:10.359 [2024-07-24 07:19:24.908661] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:10.359 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.359 07:19:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:30:11.293 07:19:25 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:30:11.293 07:19:25 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # local i=0 00:30:11.293 07:19:25 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:30:11.293 07:19:25 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:30:11.293 07:19:25 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # sleep 2 00:30:13.821 07:19:27 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:30:13.821 07:19:27 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:30:13.821 07:19:27 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:30:13.821 07:19:27 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:30:13.821 07:19:27 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:30:13.821 07:19:27 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # return 0 00:30:13.821 07:19:27 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1777142 00:30:13.821 07:19:27 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:30:13.821 07:19:27 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:30:13.821 [global] 00:30:13.821 thread=1 00:30:13.821 invalidate=1 00:30:13.821 rw=write 00:30:13.821 time_based=1 00:30:13.821 runtime=60 00:30:13.821 ioengine=libaio 00:30:13.821 direct=1 00:30:13.821 bs=4096 00:30:13.821 iodepth=1 00:30:13.821 norandommap=0 00:30:13.821 numjobs=1 00:30:13.821 00:30:13.821 verify_dump=1 00:30:13.821 verify_backlog=512 00:30:13.821 verify_state_save=0 00:30:13.821 do_verify=1 00:30:13.821 verify=crc32c-intel 00:30:13.821 [job0] 00:30:13.821 filename=/dev/nvme0n1 00:30:13.821 Could not set queue depth (nvme0n1) 00:30:13.821 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:13.821 fio-3.35 00:30:13.821 Starting 1 thread 00:30:16.379 07:19:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:30:16.379 07:19:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.379 07:19:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:16.379 true 00:30:16.379 07:19:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.379 07:19:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:30:16.379 07:19:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.379 07:19:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:16.379 true 00:30:16.379 07:19:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.379 07:19:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:30:16.379 07:19:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.379 07:19:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:16.379 true 00:30:16.379 07:19:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.379 07:19:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:30:16.379 07:19:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.379 07:19:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:16.379 true 00:30:16.379 07:19:30 
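This is the crux of the initiator-timeout case: while the 60 s fio write job above runs against /dev/nvme0n1, the Delay0 latencies are pushed from 30 us up to roughly 31 s (avg read/write, p99 read) and 310 s (p99 write), then a few log lines later dropped back to 30 us so the job can still finish. A sketch of that toggle, again assuming direct scripts/rpc.py invocation; the values are microseconds:

  # Sketch: inflate the delay bdev latencies mid-run, then restore them.
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_delay_update_latency Delay0 avg_read  31000000     # ~31 s
  $rpc bdev_delay_update_latency Delay0 avg_write 31000000
  $rpc bdev_delay_update_latency Delay0 p99_read  31000000
  $rpc bdev_delay_update_latency Delay0 p99_write 310000000    # ~310 s
  sleep 3
  for lat in avg_read avg_write p99_read p99_write; do
      $rpc bdev_delay_update_latency Delay0 "$lat" 30           # back to 30 us
  done

The ~42 s maximum completion latency in the fio summary further down is consistent with I/O that sat behind the inflated delay before it was lowered again.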
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.379 07:19:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:30:19.659 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:30:19.659 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.659 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:19.659 true 00:30:19.659 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.659 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:30:19.659 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.659 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:19.659 true 00:30:19.659 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.659 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:30:19.659 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.659 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:19.659 true 00:30:19.659 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.659 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:30:19.659 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.659 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:19.659 true 00:30:19.659 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.659 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:30:19.659 07:19:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1777142 00:31:15.874 00:31:15.874 job0: (groupid=0, jobs=1): err= 0: pid=1777394: Wed Jul 24 07:20:28 2024 00:31:15.874 read: IOPS=1182, BW=4731KiB/s (4845kB/s)(277MiB/60000msec) 00:31:15.874 slat (usec): min=5, max=13727, avg= 9.47, stdev=68.19 00:31:15.874 clat (usec): min=77, max=42251k, avg=708.98, stdev=158601.45 00:31:15.874 lat (usec): min=96, max=42251k, avg=718.45, stdev=158601.46 00:31:15.874 clat percentiles (usec): 00:31:15.874 | 1.00th=[ 99], 5.00th=[ 102], 10.00th=[ 104], 20.00th=[ 108], 00:31:15.874 | 30.00th=[ 110], 40.00th=[ 112], 50.00th=[ 114], 60.00th=[ 116], 00:31:15.874 | 70.00th=[ 118], 80.00th=[ 121], 90.00th=[ 124], 95.00th=[ 127], 00:31:15.874 | 99.00th=[ 133], 99.50th=[ 137], 99.90th=[ 143], 99.95th=[ 157], 00:31:15.874 | 99.99th=[ 297] 00:31:15.874 write: IOPS=1186, BW=4745KiB/s (4858kB/s)(278MiB/60000msec); 0 zone resets 00:31:15.874 slat (usec): 
min=6, max=1020, avg=11.45, stdev= 4.72 00:31:15.874 clat (usec): min=79, max=312, avg=110.55, stdev= 7.54 00:31:15.874 lat (usec): min=95, max=1139, avg=122.00, stdev= 9.04 00:31:15.874 clat percentiles (usec): 00:31:15.874 | 1.00th=[ 96], 5.00th=[ 100], 10.00th=[ 101], 20.00th=[ 104], 00:31:15.874 | 30.00th=[ 106], 40.00th=[ 109], 50.00th=[ 111], 60.00th=[ 113], 00:31:15.874 | 70.00th=[ 115], 80.00th=[ 118], 90.00th=[ 121], 95.00th=[ 124], 00:31:15.874 | 99.00th=[ 130], 99.50th=[ 133], 99.90th=[ 139], 99.95th=[ 143], 00:31:15.874 | 99.99th=[ 196] 00:31:15.874 bw ( KiB/s): min= 4096, max=16384, per=100.00%, avg=15444.22, stdev=2764.93, samples=36 00:31:15.874 iops : min= 1024, max= 4096, avg=3861.06, stdev=691.23, samples=36 00:31:15.874 lat (usec) : 100=4.04%, 250=95.95%, 500=0.01% 00:31:15.874 lat (msec) : >=2000=0.01% 00:31:15.874 cpu : usr=1.62%, sys=3.19%, ctx=142143, majf=0, minf=106 00:31:15.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:15.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.874 issued rwts: total=70966,71168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.874 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:15.874 00:31:15.874 Run status group 0 (all jobs): 00:31:15.874 READ: bw=4731KiB/s (4845kB/s), 4731KiB/s-4731KiB/s (4845kB/s-4845kB/s), io=277MiB (291MB), run=60000-60000msec 00:31:15.874 WRITE: bw=4745KiB/s (4858kB/s), 4745KiB/s-4745KiB/s (4858kB/s-4858kB/s), io=278MiB (292MB), run=60000-60000msec 00:31:15.874 00:31:15.874 Disk stats (read/write): 00:31:15.874 nvme0n1: ios=70973/70658, merge=0/0, ticks=7704/7562, in_queue=15266, util=99.60% 00:31:15.875 07:20:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:15.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1217 -- # local i=0 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1229 -- # return 0 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:31:15.875 nvmf hotplug test: fio successful as expected 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:31:15.875 rmmod nvme_rdma 00:31:15.875 rmmod nvme_fabrics 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1776489 ']' 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1776489 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 1776489 ']' 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 1776489 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1776489 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1776489' 00:31:15.875 killing process with pid 1776489 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 1776489 00:31:15.875 07:20:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 1776489 00:31:17.252 07:20:31 
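Teardown after the successful hotplug check is symmetric to the setup: disconnect the initiator, delete the subsystem, unload the host-side fabrics modules, and stop the target process (nvmfpid=1776489 in this run). A compressed sketch, assuming the same rpc.py path as above and a root shell:

  # Sketch: the teardown sequence visible in the surrounding log lines.
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-rdma
  modprobe -v -r nvme-fabrics
  kill 1776489        # wait 1776489 only applies if nvmf_tgt was started by this shell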
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:17.252 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:31:17.252 00:31:17.252 real 1m15.936s 00:31:17.252 user 4m38.638s 00:31:17.252 sys 0m8.696s 00:31:17.252 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:17.252 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:31:17.252 ************************************ 00:31:17.252 END TEST nvmf_initiator_timeout 00:31:17.252 ************************************ 00:31:17.252 07:20:31 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:31:17.252 07:20:31 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' rdma = tcp ']' 00:31:17.252 07:20:31 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # [[ rdma == \r\d\m\a ]] 00:31:17.252 07:20:31 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@61 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:31:17.252 07:20:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:17.252 07:20:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:17.252 07:20:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:31:17.252 ************************************ 00:31:17.252 START TEST nvmf_srq_overwhelm 00:31:17.252 ************************************ 00:31:17.252 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:31:17.252 * Looking for test storage... 
00:31:17.252 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@47 -- # : 0 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@285 -- # xtrace_disable 00:31:17.253 07:20:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:25.368 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # pci_devs=() 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # net_devs=() 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # e810=() 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # local -ga e810 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # 
x722=() 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # local -ga x722 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # mlx=() 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # local -ga mlx 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:31:25.369 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:25.369 07:20:39 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:31:25.369 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:31:25.369 Found net devices under 0000:d9:00.0: mlx_0_0 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:31:25.369 Found net devices under 0000:d9:00.1: mlx_0_1 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:25.369 
07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # is_hw=yes 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@420 -- # rdma_device_init 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # uname 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # modprobe ib_cm 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@63 -- # modprobe ib_core 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@64 -- # modprobe ib_umad 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe iw_cm 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # allocate_nic_ips 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # get_rdma_if_list 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:25.369 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:25.370 
07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:31:25.370 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:25.370 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:31:25.370 altname enp217s0f0np0 00:31:25.370 altname ens818f0np0 00:31:25.370 inet 192.168.100.8/24 scope global mlx_0_0 00:31:25.370 valid_lft forever preferred_lft forever 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:31:25.370 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:25.370 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:31:25.370 altname enp217s0f1np1 00:31:25.370 altname ens818f1np1 00:31:25.370 inet 192.168.100.9/24 scope global mlx_0_1 00:31:25.370 valid_lft forever preferred_lft forever 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # return 0 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # '[' '' == iso 
']' 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # get_rdma_if_list 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:31:25.370 07:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:25.629 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:31:25.629 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:25.629 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:25.629 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:25.629 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:31:25.629 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:31:25.629 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:25.629 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:25.629 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:25.629 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:25.629 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:25.629 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:31:25.629 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:31:25.629 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:31:25.629 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:31:25.629 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:31:25.629 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:31:25.629 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address 
mlx_0_1 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:31:25.630 192.168.100.9' 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:31:25.630 192.168.100.9' 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # head -n 1 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:31:25.630 192.168.100.9' 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # tail -n +2 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # head -n 1 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@481 -- # nvmfpid=1791542 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # waitforlisten 1791542 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@829 -- # '[' -z 1791542 ']' 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:25.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:25.630 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:25.630 [2024-07-24 07:20:40.157535] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:31:25.630 [2024-07-24 07:20:40.157634] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:25.630 EAL: No free 2048 kB hugepages reported on node 1 00:31:25.889 [2024-07-24 07:20:40.305098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:25.889 [2024-07-24 07:20:40.514051] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:25.889 [2024-07-24 07:20:40.514097] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:25.889 [2024-07-24 07:20:40.514111] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:25.889 [2024-07-24 07:20:40.514122] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:25.889 [2024-07-24 07:20:40.514150] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:25.889 [2024-07-24 07:20:40.514226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:25.889 [2024-07-24 07:20:40.514297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:25.889 [2024-07-24 07:20:40.514320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.889 [2024-07-24 07:20:40.514333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:26.456 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:26.456 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@862 -- # return 0 00:31:26.456 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:26.456 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:26.456 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:26.456 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:26.456 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:31:26.456 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.456 07:20:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:26.456 [2024-07-24 07:20:41.007856] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f79a512f940) succeed. 00:31:26.456 [2024-07-24 07:20:41.017447] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f79a50e9940) succeed. 
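The per-subsystem setup the trace records from here on is easier to read in summary. Going only by the rpc_cmd and nvme connect lines below, each of the six iterations of the srq_overwhelm loop amounts to the sketch that follows; rpc_cmd and waitforblk are the test harness's own helpers (rpc_cmd is assumed here to forward to SPDK's scripts/rpc.py over /var/tmp/spdk.sock), and the NQNs, serial numbers, listener address and host NQN are copied from the trace.

    # Hedged sketch of the traced setup loop; not the literal srq_overwhelm.sh source.
    for i in $(seq 0 5); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i   # -a: allow any host
        rpc_cmd bdev_malloc_create 64 512 -b Malloc$i             # 64 MiB RAM-backed bdev, 512-byte blocks
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
        # -i 15 requests 15 I/O queues per controller, matching the traced NVME_CONNECT setting
        nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420 \
            --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
            --hostid=8013ee90-59d8-e711-906e-00163566263e
        waitforblk nvme${i}n1                                     # poll lsblk until the new namespace shows up
    done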
00:31:26.715 07:20:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.715 07:20:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:31:26.715 07:20:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:31:26.715 07:20:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:31:26.715 07:20:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.715 07:20:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:26.715 07:20:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.715 07:20:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:26.715 07:20:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.715 07:20:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:26.715 Malloc0 00:31:26.715 07:20:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.715 07:20:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:31:26.715 07:20:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.715 07:20:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:26.715 07:20:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.715 07:20:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:31:26.715 07:20:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.715 07:20:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:26.715 [2024-07-24 07:20:41.230704] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:31:26.715 07:20:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.715 07:20:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:31:27.709 07:20:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:31:27.709 07:20:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # local i=0 00:31:27.709 07:20:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # lsblk -l -o NAME 00:31:27.709 07:20:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # grep -q -w nvme0n1 00:31:27.709 07:20:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- 
# lsblk -l -o NAME 00:31:27.709 07:20:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:31:27.709 07:20:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # return 0 00:31:27.709 07:20:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:31:27.709 07:20:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:27.709 07:20:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.709 07:20:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:27.709 07:20:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.709 07:20:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:31:27.709 07:20:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.709 07:20:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:27.709 Malloc1 00:31:27.709 07:20:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.709 07:20:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:27.709 07:20:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.709 07:20:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:27.968 07:20:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.968 07:20:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:31:27.968 07:20:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.968 07:20:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:27.968 07:20:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.968 07:20:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:31:28.901 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:31:28.901 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # local i=0 00:31:28.901 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # lsblk -l -o NAME 00:31:28.901 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # grep -q -w nvme1n1 00:31:28.901 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:31:28.901 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1240 -- # grep -q -w nvme1n1 00:31:28.901 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # return 0 00:31:28.901 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:31:28.901 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:31:28.901 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.901 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:28.901 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.901 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:31:28.901 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.901 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:28.901 Malloc2 00:31:28.901 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.901 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:31:28.902 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.902 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:28.902 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.902 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:31:28.902 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.902 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:28.902 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.902 07:20:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:31:29.833 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:31:29.833 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # local i=0 00:31:29.833 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # lsblk -l -o NAME 00:31:29.833 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # grep -q -w nvme2n1 00:31:29.833 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme2n1 00:31:29.833 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:31:29.833 07:20:44 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # return 0 00:31:29.833 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:31:29.833 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:31:29.833 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.833 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:30.091 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.091 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:31:30.091 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.091 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:30.091 Malloc3 00:31:30.091 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.091 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:31:30.091 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.091 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:30.091 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.091 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:31:30.091 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.091 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:30.091 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.091 07:20:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:31:31.022 07:20:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:31:31.022 07:20:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # local i=0 00:31:31.022 07:20:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # lsblk -l -o NAME 00:31:31.022 07:20:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # grep -q -w nvme3n1 00:31:31.022 07:20:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:31:31.022 07:20:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme3n1 00:31:31.022 07:20:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # return 0 00:31:31.022 
07:20:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:31:31.022 07:20:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:31:31.022 07:20:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.022 07:20:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:31.022 07:20:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.022 07:20:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:31:31.022 07:20:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.023 07:20:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:31.280 Malloc4 00:31:31.280 07:20:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.280 07:20:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:31:31.280 07:20:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.280 07:20:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:31.280 07:20:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.280 07:20:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:31:31.280 07:20:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.280 07:20:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:31.280 07:20:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.280 07:20:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:31:32.215 07:20:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:31:32.215 07:20:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # local i=0 00:31:32.215 07:20:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # lsblk -l -o NAME 00:31:32.215 07:20:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # grep -q -w nvme4n1 00:31:32.215 07:20:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme4n1 00:31:32.215 07:20:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:31:32.215 07:20:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # return 0 00:31:32.215 07:20:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 
00:31:32.215 07:20:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:31:32.215 07:20:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.215 07:20:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:32.215 07:20:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.215 07:20:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:31:32.215 07:20:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.215 07:20:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:32.215 Malloc5 00:31:32.215 07:20:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.215 07:20:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:31:32.215 07:20:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.215 07:20:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:32.215 07:20:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.215 07:20:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:31:32.215 07:20:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.215 07:20:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:32.215 07:20:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.215 07:20:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:31:33.604 07:20:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:31:33.604 07:20:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # local i=0 00:31:33.604 07:20:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # lsblk -l -o NAME 00:31:33.604 07:20:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # grep -q -w nvme5n1 00:31:33.604 07:20:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme5n1 00:31:33.604 07:20:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:31:33.604 07:20:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # return 0 00:31:33.604 07:20:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:31:33.604 
[global] 00:31:33.604 thread=1 00:31:33.604 invalidate=1 00:31:33.604 rw=read 00:31:33.604 time_based=1 00:31:33.604 runtime=10 00:31:33.604 ioengine=libaio 00:31:33.604 direct=1 00:31:33.604 bs=1048576 00:31:33.604 iodepth=128 00:31:33.604 norandommap=1 00:31:33.604 numjobs=13 00:31:33.604 00:31:33.604 [job0] 00:31:33.604 filename=/dev/nvme0n1 00:31:33.604 [job1] 00:31:33.604 filename=/dev/nvme1n1 00:31:33.604 [job2] 00:31:33.604 filename=/dev/nvme2n1 00:31:33.604 [job3] 00:31:33.604 filename=/dev/nvme3n1 00:31:33.604 [job4] 00:31:33.604 filename=/dev/nvme4n1 00:31:33.604 [job5] 00:31:33.604 filename=/dev/nvme5n1 00:31:33.604 Could not set queue depth (nvme0n1) 00:31:33.604 Could not set queue depth (nvme1n1) 00:31:33.604 Could not set queue depth (nvme2n1) 00:31:33.604 Could not set queue depth (nvme3n1) 00:31:33.604 Could not set queue depth (nvme4n1) 00:31:33.604 Could not set queue depth (nvme5n1) 00:31:33.866 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:31:33.866 ... 00:31:33.866 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:31:33.866 ... 00:31:33.866 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:31:33.866 ... 00:31:33.866 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:31:33.866 ... 00:31:33.866 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:31:33.866 ... 00:31:33.866 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:31:33.866 ... 
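The job file fio-wrapper prints above maps one-to-one onto the wrapper's arguments (-i 1048576, -d 128, -t read, -r 10 and -n 13 appear to correspond to bs, iodepth, rw, runtime and numjobs respectively), and with six devices at numjobs=13 that accounts for the 78 threads fio reports next. A hedged single-device command-line equivalent, assuming only a stock fio binary on PATH:

    # Hedged equivalent of one [jobN] section above; not the wrapper's literal invocation.
    fio --name=job0 --filename=/dev/nvme0n1 --thread --invalidate=1 \
        --rw=read --time_based --runtime=10 --ioengine=libaio --direct=1 \
        --bs=1048576 --iodepth=128 --norandommap --numjobs=13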
00:31:33.866 fio-3.35 00:31:33.866 Starting 78 threads 00:31:48.727 00:31:48.727 job0: (groupid=0, jobs=1): err= 0: pid=1793140: Wed Jul 24 07:21:01 2024 00:31:48.727 read: IOPS=15, BW=15.1MiB/s (15.8MB/s)(163MiB/10791msec) 00:31:48.727 slat (usec): min=715, max=2145.4k, avg=66184.77, stdev=320092.18 00:31:48.727 clat (usec): min=1349, max=10531k, avg=7744778.54, stdev=2983932.03 00:31:48.727 lat (msec): min=1877, max=10541, avg=7810.96, stdev=2927.37 00:31:48.727 clat percentiles (msec): 00:31:48.727 | 1.00th=[ 1871], 5.00th=[ 1972], 10.00th=[ 2022], 20.00th=[ 4245], 00:31:48.727 | 30.00th=[ 8557], 40.00th=[ 8792], 50.00th=[ 9060], 60.00th=[ 9194], 00:31:48.727 | 70.00th=[ 9597], 80.00th=[ 9866], 90.00th=[10134], 95.00th=[10402], 00:31:48.727 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:31:48.727 | 99.99th=[10537] 00:31:48.727 bw ( KiB/s): min= 2048, max=20480, per=0.41%, avg=11941.17, stdev=7483.68, samples=6 00:31:48.727 iops : min= 2, max= 20, avg=11.50, stdev= 7.20, samples=6 00:31:48.727 lat (msec) : 2=0.61%, 2000=7.36%, >=2000=92.02% 00:31:48.727 cpu : usr=0.00%, sys=1.13%, ctx=471, majf=0, minf=32769 00:31:48.727 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=4.9%, 16=9.8%, 32=19.6%, >=64=61.3% 00:31:48.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.727 complete : 0=0.0%, 4=97.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.7% 00:31:48.727 issued rwts: total=163,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.727 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.727 job0: (groupid=0, jobs=1): err= 0: pid=1793141: Wed Jul 24 07:21:01 2024 00:31:48.727 read: IOPS=6, BW=6842KiB/s (7006kB/s)(72.0MiB/10776msec) 00:31:48.727 slat (usec): min=837, max=2124.8k, avg=139388.78, stdev=478645.95 00:31:48.727 clat (msec): min=739, max=10718, avg=2876.10, stdev=3127.68 00:31:48.727 lat (msec): min=837, max=10775, avg=3015.49, stdev=3252.27 00:31:48.727 clat percentiles (msec): 00:31:48.727 | 1.00th=[ 743], 5.00th=[ 852], 10.00th=[ 978], 20.00th=[ 1133], 00:31:48.728 | 30.00th=[ 1368], 40.00th=[ 1485], 50.00th=[ 1620], 60.00th=[ 1838], 00:31:48.728 | 70.00th=[ 1989], 80.00th=[ 2140], 90.00th=[10537], 95.00th=[10671], 00:31:48.728 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:31:48.728 | 99.99th=[10671] 00:31:48.728 lat (msec) : 750=1.39%, 1000=13.89%, 2000=58.33%, >=2000=26.39% 00:31:48.728 cpu : usr=0.01%, sys=0.39%, ctx=208, majf=0, minf=18433 00:31:48.728 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.1%, 16=22.2%, 32=44.4%, >=64=12.5% 00:31:48.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.728 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:31:48.728 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.728 job0: (groupid=0, jobs=1): err= 0: pid=1793142: Wed Jul 24 07:21:01 2024 00:31:48.728 read: IOPS=29, BW=30.0MiB/s (31.4MB/s)(302MiB/10079msec) 00:31:48.728 slat (usec): min=45, max=2145.4k, avg=33208.00, stdev=208972.05 00:31:48.728 clat (msec): min=47, max=8589, avg=3633.45, stdev=3540.92 00:31:48.728 lat (msec): min=82, max=8600, avg=3666.66, stdev=3549.41 00:31:48.728 clat percentiles (msec): 00:31:48.728 | 1.00th=[ 95], 5.00th=[ 140], 10.00th=[ 259], 20.00th=[ 489], 00:31:48.728 | 30.00th=[ 726], 40.00th=[ 969], 50.00th=[ 978], 60.00th=[ 5336], 00:31:48.728 | 70.00th=[ 7617], 80.00th=[ 7819], 90.00th=[ 8221], 95.00th=[ 8356], 00:31:48.728 | 
99.00th=[ 8490], 99.50th=[ 8557], 99.90th=[ 8557], 99.95th=[ 8557], 00:31:48.728 | 99.99th=[ 8557] 00:31:48.728 bw ( KiB/s): min= 2048, max=133120, per=2.54%, avg=74352.00, stdev=66576.16, samples=3 00:31:48.728 iops : min= 2, max= 130, avg=72.33, stdev=64.93, samples=3 00:31:48.728 lat (msec) : 50=0.33%, 100=1.66%, 250=7.62%, 500=10.93%, 750=10.60% 00:31:48.728 lat (msec) : 1000=22.52%, 2000=3.97%, >=2000=42.38% 00:31:48.728 cpu : usr=0.01%, sys=1.20%, ctx=626, majf=0, minf=32769 00:31:48.728 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.3%, 32=10.6%, >=64=79.1% 00:31:48.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.728 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:31:48.728 issued rwts: total=302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.728 job0: (groupid=0, jobs=1): err= 0: pid=1793143: Wed Jul 24 07:21:01 2024 00:31:48.728 read: IOPS=57, BW=57.2MiB/s (60.0MB/s)(738MiB/12900msec) 00:31:48.728 slat (usec): min=43, max=2072.0k, avg=14616.89, stdev=149525.05 00:31:48.728 clat (msec): min=230, max=8930, avg=1891.19, stdev=3077.57 00:31:48.728 lat (msec): min=231, max=8938, avg=1905.81, stdev=3087.14 00:31:48.728 clat percentiles (msec): 00:31:48.728 | 1.00th=[ 247], 5.00th=[ 249], 10.00th=[ 257], 20.00th=[ 275], 00:31:48.728 | 30.00th=[ 321], 40.00th=[ 363], 50.00th=[ 405], 60.00th=[ 418], 00:31:48.728 | 70.00th=[ 464], 80.00th=[ 2702], 90.00th=[ 8658], 95.00th=[ 8792], 00:31:48.728 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:31:48.728 | 99.99th=[ 8926] 00:31:48.728 bw ( KiB/s): min= 2052, max=486451, per=6.09%, avg=178528.00, stdev=184624.84, samples=7 00:31:48.728 iops : min= 2, max= 475, avg=174.14, stdev=180.39, samples=7 00:31:48.728 lat (msec) : 250=6.78%, 500=68.83%, 750=1.36%, >=2000=23.04% 00:31:48.728 cpu : usr=0.05%, sys=1.05%, ctx=678, majf=0, minf=32769 00:31:48.728 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.3%, >=64=91.5% 00:31:48.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.728 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:31:48.728 issued rwts: total=738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.728 job0: (groupid=0, jobs=1): err= 0: pid=1793144: Wed Jul 24 07:21:01 2024 00:31:48.728 read: IOPS=19, BW=19.8MiB/s (20.8MB/s)(254MiB/12810msec) 00:31:48.728 slat (usec): min=68, max=2079.4k, avg=42127.27, stdev=251426.90 00:31:48.728 clat (msec): min=1327, max=11184, avg=6031.88, stdev=3989.30 00:31:48.728 lat (msec): min=1329, max=11197, avg=6074.01, stdev=3989.37 00:31:48.728 clat percentiles (msec): 00:31:48.728 | 1.00th=[ 1334], 5.00th=[ 1385], 10.00th=[ 1435], 20.00th=[ 1552], 00:31:48.728 | 30.00th=[ 1636], 40.00th=[ 3373], 50.00th=[ 5067], 60.00th=[ 8356], 00:31:48.728 | 70.00th=[10537], 80.00th=[10805], 90.00th=[10939], 95.00th=[10939], 00:31:48.728 | 99.00th=[11073], 99.50th=[11208], 99.90th=[11208], 99.95th=[11208], 00:31:48.728 | 99.99th=[11208] 00:31:48.728 bw ( KiB/s): min= 2052, max=88064, per=1.27%, avg=37153.57, stdev=35007.09, samples=7 00:31:48.728 iops : min= 2, max= 86, avg=36.14, stdev=34.30, samples=7 00:31:48.728 lat (msec) : 2000=32.68%, >=2000=67.32% 00:31:48.728 cpu : usr=0.00%, sys=0.97%, ctx=448, majf=0, minf=32769 00:31:48.728 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.1%, 16=6.3%, 32=12.6%, >=64=75.2% 00:31:48.728 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.728 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:31:48.728 issued rwts: total=254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.728 job0: (groupid=0, jobs=1): err= 0: pid=1793145: Wed Jul 24 07:21:01 2024 00:31:48.728 read: IOPS=24, BW=24.9MiB/s (26.1MB/s)(252MiB/10123msec) 00:31:48.728 slat (usec): min=64, max=2040.4k, avg=39797.89, stdev=215403.67 00:31:48.728 clat (msec): min=91, max=8432, avg=4702.89, stdev=3275.50 00:31:48.728 lat (msec): min=127, max=8442, avg=4742.68, stdev=3275.21 00:31:48.728 clat percentiles (msec): 00:31:48.728 | 1.00th=[ 134], 5.00th=[ 292], 10.00th=[ 498], 20.00th=[ 877], 00:31:48.728 | 30.00th=[ 1385], 40.00th=[ 3473], 50.00th=[ 5470], 60.00th=[ 6208], 00:31:48.728 | 70.00th=[ 8221], 80.00th=[ 8356], 90.00th=[ 8356], 95.00th=[ 8423], 00:31:48.728 | 99.00th=[ 8423], 99.50th=[ 8423], 99.90th=[ 8423], 99.95th=[ 8423], 00:31:48.728 | 99.99th=[ 8423] 00:31:48.728 bw ( KiB/s): min=14336, max=61440, per=1.06%, avg=30981.25, stdev=18646.87, samples=8 00:31:48.728 iops : min= 14, max= 60, avg=30.12, stdev=18.21, samples=8 00:31:48.728 lat (msec) : 100=0.40%, 250=3.57%, 500=6.35%, 750=5.56%, 1000=6.75% 00:31:48.728 lat (msec) : 2000=11.51%, >=2000=65.87% 00:31:48.728 cpu : usr=0.03%, sys=1.06%, ctx=546, majf=0, minf=32769 00:31:48.728 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.3%, 32=12.7%, >=64=75.0% 00:31:48.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.728 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:31:48.728 issued rwts: total=252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.728 job0: (groupid=0, jobs=1): err= 0: pid=1793146: Wed Jul 24 07:21:01 2024 00:31:48.728 read: IOPS=26, BW=26.9MiB/s (28.2MB/s)(271MiB/10084msec) 00:31:48.728 slat (usec): min=44, max=2107.5k, avg=37040.63, stdev=224471.33 00:31:48.728 clat (msec): min=44, max=7032, avg=2060.88, stdev=1753.65 00:31:48.728 lat (msec): min=142, max=7849, avg=2097.92, stdev=1799.42 00:31:48.728 clat percentiles (msec): 00:31:48.728 | 1.00th=[ 146], 5.00th=[ 296], 10.00th=[ 426], 20.00th=[ 443], 00:31:48.728 | 30.00th=[ 659], 40.00th=[ 1284], 50.00th=[ 1888], 60.00th=[ 2433], 00:31:48.728 | 70.00th=[ 2802], 80.00th=[ 2869], 90.00th=[ 2937], 95.00th=[ 6745], 00:31:48.728 | 99.00th=[ 7013], 99.50th=[ 7013], 99.90th=[ 7013], 99.95th=[ 7013], 00:31:48.728 | 99.99th=[ 7013] 00:31:48.728 bw ( KiB/s): min=36790, max=149504, per=2.48%, avg=72737.00, stdev=51875.03, samples=4 00:31:48.728 iops : min= 35, max= 146, avg=70.75, stdev=50.89, samples=4 00:31:48.728 lat (msec) : 50=0.37%, 250=2.58%, 500=23.25%, 750=5.90%, 1000=4.80% 00:31:48.728 lat (msec) : 2000=14.39%, >=2000=48.71% 00:31:48.728 cpu : usr=0.00%, sys=0.83%, ctx=398, majf=0, minf=32769 00:31:48.728 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=3.0%, 16=5.9%, 32=11.8%, >=64=76.8% 00:31:48.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.728 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:31:48.728 issued rwts: total=271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.728 job0: (groupid=0, jobs=1): err= 0: pid=1793147: Wed Jul 24 07:21:01 2024 00:31:48.728 read: IOPS=24, BW=24.2MiB/s (25.4MB/s)(312MiB/12901msec) 
00:31:48.728 slat (usec): min=70, max=2151.5k, avg=34605.96, stdev=207910.21 00:31:48.728 clat (msec): min=1579, max=8514, avg=4918.29, stdev=1673.00 00:31:48.728 lat (msec): min=1594, max=10630, avg=4952.90, stdev=1697.27 00:31:48.728 clat percentiles (msec): 00:31:48.728 | 1.00th=[ 1603], 5.00th=[ 1737], 10.00th=[ 1921], 20.00th=[ 4245], 00:31:48.728 | 30.00th=[ 4329], 40.00th=[ 4866], 50.00th=[ 5470], 60.00th=[ 5940], 00:31:48.728 | 70.00th=[ 6074], 80.00th=[ 6275], 90.00th=[ 6477], 95.00th=[ 6544], 00:31:48.728 | 99.00th=[ 8154], 99.50th=[ 8490], 99.90th=[ 8490], 99.95th=[ 8490], 00:31:48.728 | 99.99th=[ 8490] 00:31:48.728 bw ( KiB/s): min= 2048, max=86016, per=1.43%, avg=42082.22, stdev=31843.92, samples=9 00:31:48.728 iops : min= 2, max= 84, avg=41.00, stdev=31.00, samples=9 00:31:48.728 lat (msec) : 2000=14.10%, >=2000=85.90% 00:31:48.728 cpu : usr=0.02%, sys=0.97%, ctx=548, majf=0, minf=32769 00:31:48.728 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.6%, 16=5.1%, 32=10.3%, >=64=79.8% 00:31:48.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.728 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:31:48.728 issued rwts: total=312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.728 job0: (groupid=0, jobs=1): err= 0: pid=1793148: Wed Jul 24 07:21:01 2024 00:31:48.728 read: IOPS=133, BW=134MiB/s (140MB/s)(1351MiB/10084msec) 00:31:48.728 slat (usec): min=56, max=1170.3k, avg=7399.11, stdev=34022.04 00:31:48.728 clat (msec): min=79, max=1904, avg=901.78, stdev=312.68 00:31:48.728 lat (msec): min=194, max=1945, avg=909.18, stdev=312.34 00:31:48.728 clat percentiles (msec): 00:31:48.728 | 1.00th=[ 207], 5.00th=[ 617], 10.00th=[ 676], 20.00th=[ 701], 00:31:48.728 | 30.00th=[ 735], 40.00th=[ 793], 50.00th=[ 818], 60.00th=[ 860], 00:31:48.728 | 70.00th=[ 953], 80.00th=[ 1036], 90.00th=[ 1099], 95.00th=[ 1720], 00:31:48.728 | 99.00th=[ 1871], 99.50th=[ 1871], 99.90th=[ 1905], 99.95th=[ 1905], 00:31:48.728 | 99.99th=[ 1905] 00:31:48.728 bw ( KiB/s): min= 2048, max=249856, per=4.74%, avg=139107.33, stdev=63126.05, samples=18 00:31:48.728 iops : min= 2, max= 244, avg=135.67, stdev=61.69, samples=18 00:31:48.729 lat (msec) : 100=0.07%, 250=1.11%, 500=1.70%, 750=28.94%, 1000=43.75% 00:31:48.729 lat (msec) : 2000=24.43% 00:31:48.729 cpu : usr=0.05%, sys=2.48%, ctx=1294, majf=0, minf=32769 00:31:48.729 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3% 00:31:48.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.729 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:48.729 issued rwts: total=1351,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.729 job0: (groupid=0, jobs=1): err= 0: pid=1793149: Wed Jul 24 07:21:01 2024 00:31:48.729 read: IOPS=20, BW=20.9MiB/s (21.9MB/s)(224MiB/10707msec) 00:31:48.729 slat (usec): min=57, max=2150.3k, avg=44658.81, stdev=240772.56 00:31:48.729 clat (msec): min=701, max=8620, avg=2259.51, stdev=1659.19 00:31:48.729 lat (msec): min=713, max=8626, avg=2304.16, stdev=1733.83 00:31:48.729 clat percentiles (msec): 00:31:48.729 | 1.00th=[ 718], 5.00th=[ 743], 10.00th=[ 927], 20.00th=[ 1250], 00:31:48.729 | 30.00th=[ 1586], 40.00th=[ 2072], 50.00th=[ 2165], 60.00th=[ 2232], 00:31:48.729 | 70.00th=[ 2265], 80.00th=[ 2299], 90.00th=[ 2635], 95.00th=[ 8423], 00:31:48.729 | 99.00th=[ 8490], 99.50th=[ 8557], 
99.90th=[ 8658], 99.95th=[ 8658], 00:31:48.729 | 99.99th=[ 8658] 00:31:48.729 bw ( KiB/s): min=18432, max=69033, per=1.55%, avg=45403.75, stdev=25645.14, samples=4 00:31:48.729 iops : min= 18, max= 67, avg=44.00, stdev=25.13, samples=4 00:31:48.729 lat (msec) : 750=5.36%, 1000=8.48%, 2000=24.55%, >=2000=61.61% 00:31:48.729 cpu : usr=0.00%, sys=0.70%, ctx=438, majf=0, minf=32769 00:31:48.729 IO depths : 1=0.4%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.1%, 32=14.3%, >=64=71.9% 00:31:48.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.729 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:31:48.729 issued rwts: total=224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.729 job0: (groupid=0, jobs=1): err= 0: pid=1793150: Wed Jul 24 07:21:01 2024 00:31:48.729 read: IOPS=16, BW=16.9MiB/s (17.7MB/s)(216MiB/12813msec) 00:31:48.729 slat (usec): min=414, max=2138.5k, avg=49606.73, stdev=282356.07 00:31:48.729 clat (msec): min=1131, max=11660, avg=7036.65, stdev=4500.56 00:31:48.729 lat (msec): min=1162, max=11674, avg=7086.26, stdev=4493.08 00:31:48.729 clat percentiles (msec): 00:31:48.729 | 1.00th=[ 1167], 5.00th=[ 1200], 10.00th=[ 1267], 20.00th=[ 1401], 00:31:48.729 | 30.00th=[ 1452], 40.00th=[ 5470], 50.00th=[10671], 60.00th=[10805], 00:31:48.729 | 70.00th=[11073], 80.00th=[11208], 90.00th=[11342], 95.00th=[11476], 00:31:48.729 | 99.00th=[11610], 99.50th=[11610], 99.90th=[11610], 99.95th=[11610], 00:31:48.729 | 99.99th=[11610] 00:31:48.729 bw ( KiB/s): min= 2052, max=118784, per=1.04%, avg=30374.50, stdev=44219.74, samples=6 00:31:48.729 iops : min= 2, max= 116, avg=29.50, stdev=43.26, samples=6 00:31:48.729 lat (msec) : 2000=33.33%, >=2000=66.67% 00:31:48.729 cpu : usr=0.01%, sys=0.66%, ctx=450, majf=0, minf=32769 00:31:48.729 IO depths : 1=0.5%, 2=0.9%, 4=1.9%, 8=3.7%, 16=7.4%, 32=14.8%, >=64=70.8% 00:31:48.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.729 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:31:48.729 issued rwts: total=216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.729 job0: (groupid=0, jobs=1): err= 0: pid=1793151: Wed Jul 24 07:21:01 2024 00:31:48.729 read: IOPS=22, BW=22.2MiB/s (23.3MB/s)(241MiB/10841msec) 00:31:48.729 slat (usec): min=51, max=1939.7k, avg=44973.70, stdev=236668.14 00:31:48.729 clat (usec): min=1015, max=9492.2k, avg=5369070.09, stdev=2770607.65 00:31:48.729 lat (msec): min=1655, max=9502, avg=5414.04, stdev=2759.49 00:31:48.729 clat percentiles (msec): 00:31:48.729 | 1.00th=[ 1670], 5.00th=[ 1838], 10.00th=[ 1989], 20.00th=[ 2165], 00:31:48.729 | 30.00th=[ 2232], 40.00th=[ 3943], 50.00th=[ 5671], 60.00th=[ 6342], 00:31:48.729 | 70.00th=[ 7752], 80.00th=[ 8490], 90.00th=[ 8926], 95.00th=[ 9194], 00:31:48.729 | 99.00th=[ 9463], 99.50th=[ 9463], 99.90th=[ 9463], 99.95th=[ 9463], 00:31:48.729 | 99.99th=[ 9463] 00:31:48.729 bw ( KiB/s): min=10240, max=59392, per=0.99%, avg=28920.25, stdev=15341.90, samples=8 00:31:48.729 iops : min= 10, max= 58, avg=28.12, stdev=14.97, samples=8 00:31:48.729 lat (msec) : 2=0.41%, 2000=10.37%, >=2000=89.21% 00:31:48.729 cpu : usr=0.00%, sys=1.08%, ctx=593, majf=0, minf=32769 00:31:48.729 IO depths : 1=0.4%, 2=0.8%, 4=1.7%, 8=3.3%, 16=6.6%, 32=13.3%, >=64=73.9% 00:31:48.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.729 complete : 0=0.0%, 
4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:31:48.729 issued rwts: total=241,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.729 job0: (groupid=0, jobs=1): err= 0: pid=1793152: Wed Jul 24 07:21:01 2024 00:31:48.729 read: IOPS=17, BW=18.0MiB/s (18.9MB/s)(195MiB/10845msec) 00:31:48.729 slat (usec): min=74, max=2127.9k, avg=51278.48, stdev=260522.27 00:31:48.729 clat (msec): min=843, max=9835, avg=6496.53, stdev=3480.03 00:31:48.729 lat (msec): min=851, max=9838, avg=6547.81, stdev=3463.42 00:31:48.729 clat percentiles (msec): 00:31:48.729 | 1.00th=[ 852], 5.00th=[ 1250], 10.00th=[ 1368], 20.00th=[ 1737], 00:31:48.729 | 30.00th=[ 4077], 40.00th=[ 6477], 50.00th=[ 8557], 60.00th=[ 8658], 00:31:48.729 | 70.00th=[ 9463], 80.00th=[ 9463], 90.00th=[ 9597], 95.00th=[ 9731], 00:31:48.729 | 99.00th=[ 9866], 99.50th=[ 9866], 99.90th=[ 9866], 99.95th=[ 9866], 00:31:48.729 | 99.99th=[ 9866] 00:31:48.729 bw ( KiB/s): min=10240, max=36790, per=0.66%, avg=19365.29, stdev=9229.36, samples=7 00:31:48.729 iops : min= 10, max= 35, avg=18.71, stdev= 8.69, samples=7 00:31:48.729 lat (msec) : 1000=2.05%, 2000=23.59%, >=2000=74.36% 00:31:48.729 cpu : usr=0.00%, sys=0.88%, ctx=429, majf=0, minf=32769 00:31:48.729 IO depths : 1=0.5%, 2=1.0%, 4=2.1%, 8=4.1%, 16=8.2%, 32=16.4%, >=64=67.7% 00:31:48.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.729 complete : 0=0.0%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.4% 00:31:48.729 issued rwts: total=195,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.729 job1: (groupid=0, jobs=1): err= 0: pid=1793153: Wed Jul 24 07:21:01 2024 00:31:48.729 read: IOPS=5, BW=5172KiB/s (5296kB/s)(65.0MiB/12869msec) 00:31:48.729 slat (usec): min=610, max=2090.3k, avg=165530.20, stdev=552539.65 00:31:48.729 clat (msec): min=2108, max=12865, avg=9653.47, stdev=3310.19 00:31:48.729 lat (msec): min=4167, max=12868, avg=9819.00, stdev=3193.99 00:31:48.729 clat percentiles (msec): 00:31:48.729 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342], 00:31:48.729 | 30.00th=[ 8490], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[12684], 00:31:48.729 | 70.00th=[12684], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:31:48.729 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:31:48.729 | 99.99th=[12818] 00:31:48.729 lat (msec) : >=2000=100.00% 00:31:48.729 cpu : usr=0.00%, sys=0.50%, ctx=75, majf=0, minf=16641 00:31:48.729 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.3%, 16=24.6%, 32=49.2%, >=64=3.1% 00:31:48.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.729 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:31:48.729 issued rwts: total=65,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.729 job1: (groupid=0, jobs=1): err= 0: pid=1793154: Wed Jul 24 07:21:01 2024 00:31:48.729 read: IOPS=22, BW=22.1MiB/s (23.2MB/s)(283MiB/12814msec) 00:31:48.729 slat (usec): min=59, max=2144.9k, avg=37814.89, stdev=246441.33 00:31:48.729 clat (msec): min=714, max=12725, avg=5410.02, stdev=4739.18 00:31:48.729 lat (msec): min=716, max=12813, avg=5447.83, stdev=4743.88 00:31:48.729 clat percentiles (msec): 00:31:48.729 | 1.00th=[ 718], 5.00th=[ 776], 10.00th=[ 835], 20.00th=[ 885], 00:31:48.729 | 30.00th=[ 953], 40.00th=[ 1045], 50.00th=[ 2869], 60.00th=[ 8490], 
00:31:48.729 | 70.00th=[10805], 80.00th=[10939], 90.00th=[11208], 95.00th=[11208], 00:31:48.729 | 99.00th=[11208], 99.50th=[11208], 99.90th=[12684], 99.95th=[12684], 00:31:48.729 | 99.99th=[12684] 00:31:48.729 bw ( KiB/s): min= 2052, max=184320, per=1.56%, avg=45638.14, stdev=64252.73, samples=7 00:31:48.729 iops : min= 2, max= 180, avg=44.43, stdev=62.83, samples=7 00:31:48.729 lat (msec) : 750=2.47%, 1000=31.10%, 2000=15.19%, >=2000=51.24% 00:31:48.729 cpu : usr=0.02%, sys=0.76%, ctx=365, majf=0, minf=32769 00:31:48.729 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.7%, 32=11.3%, >=64=77.7% 00:31:48.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.729 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:31:48.729 issued rwts: total=283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.729 job1: (groupid=0, jobs=1): err= 0: pid=1793155: Wed Jul 24 07:21:01 2024 00:31:48.729 read: IOPS=11, BW=11.6MiB/s (12.2MB/s)(117MiB/10089msec) 00:31:48.729 slat (usec): min=455, max=2100.6k, avg=85741.63, stdev=347170.72 00:31:48.729 clat (msec): min=55, max=10085, avg=3454.06, stdev=3295.76 00:31:48.729 lat (msec): min=102, max=10088, avg=3539.80, stdev=3336.84 00:31:48.729 clat percentiles (msec): 00:31:48.729 | 1.00th=[ 103], 5.00th=[ 192], 10.00th=[ 342], 20.00th=[ 693], 00:31:48.729 | 30.00th=[ 1028], 40.00th=[ 1334], 50.00th=[ 3138], 60.00th=[ 3272], 00:31:48.729 | 70.00th=[ 3440], 80.00th=[ 5738], 90.00th=[ 9866], 95.00th=[10000], 00:31:48.729 | 99.00th=[10000], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:31:48.729 | 99.99th=[10134] 00:31:48.729 lat (msec) : 100=0.85%, 250=5.13%, 500=5.98%, 750=10.26%, 1000=7.69% 00:31:48.729 lat (msec) : 2000=13.68%, >=2000=56.41% 00:31:48.729 cpu : usr=0.01%, sys=0.72%, ctx=355, majf=0, minf=29953 00:31:48.729 IO depths : 1=0.9%, 2=1.7%, 4=3.4%, 8=6.8%, 16=13.7%, 32=27.4%, >=64=46.2% 00:31:48.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.729 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:31:48.729 issued rwts: total=117,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.729 job1: (groupid=0, jobs=1): err= 0: pid=1793156: Wed Jul 24 07:21:01 2024 00:31:48.729 read: IOPS=7, BW=7405KiB/s (7583kB/s)(73.0MiB/10095msec) 00:31:48.729 slat (usec): min=415, max=2123.9k, avg=137044.89, stdev=473435.15 00:31:48.729 clat (msec): min=89, max=10092, avg=4031.42, stdev=4286.64 00:31:48.729 lat (msec): min=94, max=10094, avg=4168.46, stdev=4318.67 00:31:48.730 clat percentiles (msec): 00:31:48.730 | 1.00th=[ 90], 5.00th=[ 101], 10.00th=[ 188], 20.00th=[ 218], 00:31:48.730 | 30.00th=[ 447], 40.00th=[ 1070], 50.00th=[ 1267], 60.00th=[ 3540], 00:31:48.730 | 70.00th=[ 7886], 80.00th=[10000], 90.00th=[10134], 95.00th=[10134], 00:31:48.730 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:31:48.730 | 99.99th=[10134] 00:31:48.730 lat (msec) : 100=5.48%, 250=16.44%, 500=9.59%, 750=5.48%, 1000=2.74% 00:31:48.730 lat (msec) : 2000=19.18%, >=2000=41.10% 00:31:48.730 cpu : usr=0.00%, sys=0.57%, ctx=248, majf=0, minf=18689 00:31:48.730 IO depths : 1=1.4%, 2=2.7%, 4=5.5%, 8=11.0%, 16=21.9%, 32=43.8%, >=64=13.7% 00:31:48.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.730 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:31:48.730 
issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.730 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.730 job1: (groupid=0, jobs=1): err= 0: pid=1793157: Wed Jul 24 07:21:01 2024 00:31:48.730 read: IOPS=2, BW=2161KiB/s (2213kB/s)(27.0MiB/12796msec) 00:31:48.730 slat (usec): min=1370, max=2136.2k, avg=395970.17, stdev=811338.81 00:31:48.730 clat (msec): min=2104, max=12788, avg=9730.67, stdev=3595.76 00:31:48.730 lat (msec): min=4163, max=12795, avg=10126.64, stdev=3300.13 00:31:48.730 clat percentiles (msec): 00:31:48.730 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4178], 20.00th=[ 6342], 00:31:48.730 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[10671], 60.00th=[12684], 00:31:48.730 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12818], 95.00th=[12818], 00:31:48.730 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:31:48.730 | 99.99th=[12818] 00:31:48.730 lat (msec) : >=2000=100.00% 00:31:48.730 cpu : usr=0.00%, sys=0.16%, ctx=72, majf=0, minf=6913 00:31:48.730 IO depths : 1=3.7%, 2=7.4%, 4=14.8%, 8=29.6%, 16=44.4%, 32=0.0%, >=64=0.0% 00:31:48.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.730 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:31:48.730 issued rwts: total=27,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.730 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.730 job1: (groupid=0, jobs=1): err= 0: pid=1793158: Wed Jul 24 07:21:01 2024 00:31:48.730 read: IOPS=38, BW=38.7MiB/s (40.6MB/s)(390MiB/10065msec) 00:31:48.730 slat (usec): min=45, max=2078.5k, avg=25636.33, stdev=177390.47 00:31:48.730 clat (msec): min=64, max=7398, avg=1291.91, stdev=1365.58 00:31:48.730 lat (msec): min=67, max=7475, avg=1317.55, stdev=1409.25 00:31:48.730 clat percentiles (msec): 00:31:48.730 | 1.00th=[ 71], 5.00th=[ 309], 10.00th=[ 625], 20.00th=[ 693], 00:31:48.730 | 30.00th=[ 735], 40.00th=[ 760], 50.00th=[ 785], 60.00th=[ 969], 00:31:48.730 | 70.00th=[ 1469], 80.00th=[ 1586], 90.00th=[ 1703], 95.00th=[ 3540], 00:31:48.730 | 99.00th=[ 7416], 99.50th=[ 7416], 99.90th=[ 7416], 99.95th=[ 7416], 00:31:48.730 | 99.99th=[ 7416] 00:31:48.730 bw ( KiB/s): min= 8192, max=190464, per=3.03%, avg=88756.00, stdev=71714.04, samples=6 00:31:48.730 iops : min= 8, max= 186, avg=86.50, stdev=70.06, samples=6 00:31:48.730 lat (msec) : 100=1.54%, 250=3.33%, 500=3.08%, 750=26.15%, 1000=25.90% 00:31:48.730 lat (msec) : 2000=33.85%, >=2000=6.15% 00:31:48.730 cpu : usr=0.00%, sys=1.08%, ctx=610, majf=0, minf=32769 00:31:48.730 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.1%, 32=8.2%, >=64=83.8% 00:31:48.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.730 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:31:48.730 issued rwts: total=390,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.730 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.730 job1: (groupid=0, jobs=1): err= 0: pid=1793159: Wed Jul 24 07:21:01 2024 00:31:48.730 read: IOPS=25, BW=25.6MiB/s (26.9MB/s)(330MiB/12878msec) 00:31:48.730 slat (usec): min=72, max=2038.2k, avg=32658.94, stdev=168454.77 00:31:48.730 clat (msec): min=1103, max=8846, avg=4287.30, stdev=2875.38 00:31:48.730 lat (msec): min=1179, max=8848, avg=4319.96, stdev=2875.45 00:31:48.730 clat percentiles (msec): 00:31:48.730 | 1.00th=[ 1183], 5.00th=[ 1200], 10.00th=[ 1284], 20.00th=[ 1435], 00:31:48.730 | 30.00th=[ 1603], 40.00th=[ 2970], 50.00th=[ 3138], 60.00th=[ 4010], 
00:31:48.730 | 70.00th=[ 6342], 80.00th=[ 8087], 90.00th=[ 8658], 95.00th=[ 8792], 00:31:48.730 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:31:48.730 | 99.99th=[ 8792] 00:31:48.730 bw ( KiB/s): min= 2052, max=118784, per=1.42%, avg=41570.30, stdev=38559.69, samples=10 00:31:48.730 iops : min= 2, max= 116, avg=40.50, stdev=37.71, samples=10 00:31:48.730 lat (msec) : 2000=35.76%, >=2000=64.24% 00:31:48.730 cpu : usr=0.01%, sys=0.83%, ctx=704, majf=0, minf=32769 00:31:48.730 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.8%, 32=9.7%, >=64=80.9% 00:31:48.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.730 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:31:48.730 issued rwts: total=330,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.730 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.730 job1: (groupid=0, jobs=1): err= 0: pid=1793160: Wed Jul 24 07:21:01 2024 00:31:48.730 read: IOPS=26, BW=26.0MiB/s (27.3MB/s)(333MiB/12807msec) 00:31:48.730 slat (usec): min=51, max=2113.7k, avg=32117.20, stdev=208718.84 00:31:48.730 clat (msec): min=698, max=11147, avg=4583.84, stdev=4402.74 00:31:48.730 lat (msec): min=700, max=11149, avg=4615.96, stdev=4411.99 00:31:48.730 clat percentiles (msec): 00:31:48.730 | 1.00th=[ 701], 5.00th=[ 701], 10.00th=[ 701], 20.00th=[ 709], 00:31:48.730 | 30.00th=[ 718], 40.00th=[ 768], 50.00th=[ 1183], 60.00th=[ 4077], 00:31:48.730 | 70.00th=[ 7617], 80.00th=[10805], 90.00th=[10939], 95.00th=[11073], 00:31:48.730 | 99.00th=[11208], 99.50th=[11208], 99.90th=[11208], 99.95th=[11208], 00:31:48.730 | 99.99th=[11208] 00:31:48.730 bw ( KiB/s): min= 2052, max=176128, per=1.80%, avg=52728.25, stdev=62302.07, samples=8 00:31:48.730 iops : min= 2, max= 172, avg=51.38, stdev=60.89, samples=8 00:31:48.730 lat (msec) : 750=32.13%, 1000=15.32%, 2000=3.30%, >=2000=49.25% 00:31:48.730 cpu : usr=0.00%, sys=1.08%, ctx=460, majf=0, minf=32769 00:31:48.730 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.8%, 32=9.6%, >=64=81.1% 00:31:48.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.730 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:31:48.730 issued rwts: total=333,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.730 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.730 job1: (groupid=0, jobs=1): err= 0: pid=1793161: Wed Jul 24 07:21:01 2024 00:31:48.730 read: IOPS=28, BW=28.0MiB/s (29.4MB/s)(283MiB/10093msec) 00:31:48.730 slat (usec): min=75, max=2087.0k, avg=35434.07, stdev=181996.55 00:31:48.730 clat (msec): min=62, max=7032, avg=3992.68, stdev=2471.65 00:31:48.730 lat (msec): min=96, max=7035, avg=4028.12, stdev=2467.95 00:31:48.730 clat percentiles (msec): 00:31:48.730 | 1.00th=[ 101], 5.00th=[ 443], 10.00th=[ 634], 20.00th=[ 1318], 00:31:48.730 | 30.00th=[ 2534], 40.00th=[ 2601], 50.00th=[ 2735], 60.00th=[ 6275], 00:31:48.730 | 70.00th=[ 6544], 80.00th=[ 6745], 90.00th=[ 6812], 95.00th=[ 6946], 00:31:48.730 | 99.00th=[ 7013], 99.50th=[ 7013], 99.90th=[ 7013], 99.95th=[ 7013], 00:31:48.730 | 99.99th=[ 7013] 00:31:48.730 bw ( KiB/s): min= 4096, max=69632, per=1.34%, avg=39393.00, stdev=25776.70, samples=8 00:31:48.730 iops : min= 4, max= 68, avg=38.25, stdev=25.25, samples=8 00:31:48.730 lat (msec) : 100=0.71%, 250=2.12%, 500=4.24%, 750=5.65%, 1000=3.89% 00:31:48.730 lat (msec) : 2000=4.24%, >=2000=79.15% 00:31:48.730 cpu : usr=0.02%, sys=1.10%, ctx=748, majf=0, minf=32769 00:31:48.730 IO depths : 
1=0.4%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.7%, 32=11.3%, >=64=77.7% 00:31:48.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.730 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:31:48.730 issued rwts: total=283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.730 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.730 job1: (groupid=0, jobs=1): err= 0: pid=1793162: Wed Jul 24 07:21:01 2024 00:31:48.730 read: IOPS=70, BW=70.7MiB/s (74.1MB/s)(714MiB/10106msec) 00:31:48.730 slat (usec): min=45, max=1889.5k, avg=14023.76, stdev=93575.91 00:31:48.730 clat (msec): min=88, max=3920, avg=1435.70, stdev=1054.90 00:31:48.730 lat (msec): min=157, max=3925, avg=1449.72, stdev=1057.97 00:31:48.730 clat percentiles (msec): 00:31:48.730 | 1.00th=[ 313], 5.00th=[ 835], 10.00th=[ 844], 20.00th=[ 844], 00:31:48.730 | 30.00th=[ 852], 40.00th=[ 869], 50.00th=[ 894], 60.00th=[ 953], 00:31:48.730 | 70.00th=[ 1003], 80.00th=[ 2970], 90.00th=[ 3507], 95.00th=[ 3708], 00:31:48.730 | 99.00th=[ 3876], 99.50th=[ 3910], 99.90th=[ 3910], 99.95th=[ 3910], 00:31:48.730 | 99.99th=[ 3910] 00:31:48.730 bw ( KiB/s): min= 2048, max=159744, per=3.14%, avg=92105.92, stdev=65485.62, samples=13 00:31:48.730 iops : min= 2, max= 156, avg=89.77, stdev=64.01, samples=13 00:31:48.730 lat (msec) : 100=0.14%, 250=0.56%, 500=1.12%, 750=1.40%, 1000=66.53% 00:31:48.730 lat (msec) : 2000=8.26%, >=2000=21.99% 00:31:48.730 cpu : usr=0.02%, sys=1.43%, ctx=848, majf=0, minf=32769 00:31:48.730 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.5%, >=64=91.2% 00:31:48.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.730 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:31:48.730 issued rwts: total=714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.730 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.730 job1: (groupid=0, jobs=1): err= 0: pid=1793163: Wed Jul 24 07:21:01 2024 00:31:48.730 read: IOPS=13, BW=13.5MiB/s (14.2MB/s)(145MiB/10727msec) 00:31:48.730 slat (usec): min=512, max=2110.8k, avg=73959.32, stdev=316639.61 00:31:48.730 clat (usec): min=1746, max=10607k, avg=5566513.55, stdev=3019478.00 00:31:48.730 lat (msec): min=1113, max=10614, avg=5640.47, stdev=3013.27 00:31:48.730 clat percentiles (msec): 00:31:48.730 | 1.00th=[ 1116], 5.00th=[ 1301], 10.00th=[ 1435], 20.00th=[ 1620], 00:31:48.730 | 30.00th=[ 5201], 40.00th=[ 5403], 50.00th=[ 5738], 60.00th=[ 6074], 00:31:48.730 | 70.00th=[ 6342], 80.00th=[ 9329], 90.00th=[ 9463], 95.00th=[10537], 00:31:48.731 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:31:48.731 | 99.99th=[10671] 00:31:48.731 bw ( KiB/s): min= 8159, max=26624, per=0.59%, avg=17391.50, stdev=13056.73, samples=2 00:31:48.731 iops : min= 7, max= 26, avg=16.50, stdev=13.44, samples=2 00:31:48.731 lat (msec) : 2=0.69%, 2000=22.76%, >=2000=76.55% 00:31:48.731 cpu : usr=0.00%, sys=0.73%, ctx=393, majf=0, minf=32769 00:31:48.731 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=5.5%, 16=11.0%, 32=22.1%, >=64=56.6% 00:31:48.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.731 complete : 0=0.0%, 4=94.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=5.3% 00:31:48.731 issued rwts: total=145,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.731 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.731 job1: (groupid=0, jobs=1): err= 0: pid=1793164: Wed Jul 24 07:21:01 2024 00:31:48.731 read: IOPS=38, BW=38.9MiB/s 
(40.8MB/s)(502MiB/12896msec) 00:31:48.731 slat (usec): min=43, max=2090.8k, avg=21490.84, stdev=155226.99 00:31:48.731 clat (msec): min=414, max=8517, avg=2964.69, stdev=2206.33 00:31:48.731 lat (msec): min=415, max=10005, avg=2986.18, stdev=2215.74 00:31:48.731 clat percentiles (msec): 00:31:48.731 | 1.00th=[ 418], 5.00th=[ 422], 10.00th=[ 451], 20.00th=[ 718], 00:31:48.731 | 30.00th=[ 1351], 40.00th=[ 2265], 50.00th=[ 2500], 60.00th=[ 2903], 00:31:48.731 | 70.00th=[ 3037], 80.00th=[ 6409], 90.00th=[ 6678], 95.00th=[ 6812], 00:31:48.731 | 99.00th=[ 6946], 99.50th=[ 7013], 99.90th=[ 8490], 99.95th=[ 8490], 00:31:48.731 | 99.99th=[ 8490] 00:31:48.731 bw ( KiB/s): min= 2048, max=247313, per=2.38%, avg=69770.18, stdev=89279.01, samples=11 00:31:48.731 iops : min= 2, max= 241, avg=68.00, stdev=87.14, samples=11 00:31:48.731 lat (msec) : 500=11.35%, 750=10.96%, 1000=2.59%, 2000=7.97%, >=2000=67.13% 00:31:48.731 cpu : usr=0.02%, sys=1.19%, ctx=619, majf=0, minf=32769 00:31:48.731 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.4%, >=64=87.5% 00:31:48.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.731 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:31:48.731 issued rwts: total=502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.731 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.731 job1: (groupid=0, jobs=1): err= 0: pid=1793165: Wed Jul 24 07:21:01 2024 00:31:48.731 read: IOPS=78, BW=78.2MiB/s (82.0MB/s)(788MiB/10075msec) 00:31:48.731 slat (usec): min=55, max=2105.1k, avg=12698.51, stdev=105699.54 00:31:48.731 clat (msec): min=62, max=5414, avg=1564.09, stdev=1588.76 00:31:48.731 lat (msec): min=92, max=5657, avg=1576.78, stdev=1597.38 00:31:48.731 clat percentiles (msec): 00:31:48.731 | 1.00th=[ 213], 5.00th=[ 430], 10.00th=[ 439], 20.00th=[ 575], 00:31:48.731 | 30.00th=[ 701], 40.00th=[ 718], 50.00th=[ 1062], 60.00th=[ 1099], 00:31:48.731 | 70.00th=[ 1318], 80.00th=[ 1653], 90.00th=[ 5000], 95.00th=[ 5201], 00:31:48.731 | 99.00th=[ 5403], 99.50th=[ 5403], 99.90th=[ 5403], 99.95th=[ 5403], 00:31:48.731 | 99.99th=[ 5403] 00:31:48.731 bw ( KiB/s): min= 4096, max=282624, per=3.97%, avg=116347.64, stdev=72378.82, samples=11 00:31:48.731 iops : min= 4, max= 276, avg=113.55, stdev=70.72, samples=11 00:31:48.731 lat (msec) : 100=0.25%, 250=1.52%, 500=13.32%, 750=29.70%, 1000=4.95% 00:31:48.731 lat (msec) : 2000=33.63%, >=2000=16.62% 00:31:48.731 cpu : usr=0.04%, sys=2.03%, ctx=831, majf=0, minf=32769 00:31:48.731 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.1%, >=64=92.0% 00:31:48.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.731 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:31:48.731 issued rwts: total=788,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.731 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.731 job2: (groupid=0, jobs=1): err= 0: pid=1793166: Wed Jul 24 07:21:01 2024 00:31:48.731 read: IOPS=25, BW=25.3MiB/s (26.6MB/s)(255MiB/10063msec) 00:31:48.731 slat (usec): min=118, max=2071.0k, avg=39216.02, stdev=217297.81 00:31:48.731 clat (msec): min=61, max=9999, avg=4629.35, stdev=3278.37 00:31:48.731 lat (msec): min=92, max=10000, avg=4668.57, stdev=3275.95 00:31:48.731 clat percentiles (msec): 00:31:48.731 | 1.00th=[ 96], 5.00th=[ 255], 10.00th=[ 651], 20.00th=[ 1200], 00:31:48.731 | 30.00th=[ 1670], 40.00th=[ 1737], 50.00th=[ 3675], 60.00th=[ 7684], 00:31:48.731 | 70.00th=[ 7886], 80.00th=[ 8020], 
90.00th=[ 8221], 95.00th=[ 8356], 00:31:48.731 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:31:48.731 | 99.99th=[10000] 00:31:48.731 bw ( KiB/s): min=10219, max=63488, per=1.11%, avg=32432.75, stdev=18429.27, samples=8 00:31:48.731 iops : min= 9, max= 62, avg=31.38, stdev=18.11, samples=8 00:31:48.731 lat (msec) : 100=1.18%, 250=3.53%, 500=3.53%, 750=5.10%, 1000=4.31% 00:31:48.731 lat (msec) : 2000=24.31%, >=2000=58.04% 00:31:48.731 cpu : usr=0.02%, sys=1.06%, ctx=605, majf=0, minf=32769 00:31:48.731 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.1%, 16=6.3%, 32=12.5%, >=64=75.3% 00:31:48.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.731 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:31:48.731 issued rwts: total=255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.731 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.731 job2: (groupid=0, jobs=1): err= 0: pid=1793167: Wed Jul 24 07:21:01 2024 00:31:48.731 read: IOPS=18, BW=18.1MiB/s (19.0MB/s)(182MiB/10068msec) 00:31:48.731 slat (usec): min=438, max=2129.5k, avg=55014.38, stdev=253850.68 00:31:48.731 clat (msec): min=54, max=8988, avg=5203.46, stdev=3752.39 00:31:48.731 lat (msec): min=83, max=9007, avg=5258.48, stdev=3748.76 00:31:48.731 clat percentiles (msec): 00:31:48.731 | 1.00th=[ 84], 5.00th=[ 129], 10.00th=[ 380], 20.00th=[ 634], 00:31:48.731 | 30.00th=[ 1234], 40.00th=[ 3440], 50.00th=[ 7953], 60.00th=[ 8356], 00:31:48.731 | 70.00th=[ 8658], 80.00th=[ 8792], 90.00th=[ 8792], 95.00th=[ 8926], 00:31:48.731 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:31:48.731 | 99.99th=[ 8926] 00:31:48.731 bw ( KiB/s): min=12263, max=61440, per=1.26%, avg=36855.67, stdev=24588.50, samples=3 00:31:48.731 iops : min= 11, max= 60, avg=35.67, stdev=24.50, samples=3 00:31:48.731 lat (msec) : 100=1.10%, 250=7.14%, 500=6.04%, 750=8.24%, 1000=3.30% 00:31:48.731 lat (msec) : 2000=11.54%, >=2000=62.64% 00:31:48.731 cpu : usr=0.01%, sys=0.92%, ctx=625, majf=0, minf=32769 00:31:48.731 IO depths : 1=0.5%, 2=1.1%, 4=2.2%, 8=4.4%, 16=8.8%, 32=17.6%, >=64=65.4% 00:31:48.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.731 complete : 0=0.0%, 4=98.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.8% 00:31:48.731 issued rwts: total=182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.731 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.731 job2: (groupid=0, jobs=1): err= 0: pid=1793168: Wed Jul 24 07:21:01 2024 00:31:48.731 read: IOPS=136, BW=137MiB/s (143MB/s)(1388MiB/10144msec) 00:31:48.731 slat (usec): min=63, max=106394, avg=7222.71, stdev=19080.55 00:31:48.731 clat (msec): min=108, max=1221, avg=894.90, stdev=174.13 00:31:48.731 lat (msec): min=214, max=1224, avg=902.13, stdev=174.90 00:31:48.731 clat percentiles (msec): 00:31:48.731 | 1.00th=[ 241], 5.00th=[ 701], 10.00th=[ 709], 20.00th=[ 751], 00:31:48.731 | 30.00th=[ 835], 40.00th=[ 844], 50.00th=[ 885], 60.00th=[ 919], 00:31:48.731 | 70.00th=[ 986], 80.00th=[ 1062], 90.00th=[ 1116], 95.00th=[ 1150], 00:31:48.731 | 99.00th=[ 1217], 99.50th=[ 1217], 99.90th=[ 1217], 99.95th=[ 1217], 00:31:48.731 | 99.99th=[ 1217] 00:31:48.731 bw ( KiB/s): min=67584, max=188416, per=4.63%, avg=135814.74, stdev=29655.19, samples=19 00:31:48.731 iops : min= 66, max= 184, avg=132.63, stdev=28.96, samples=19 00:31:48.731 lat (msec) : 250=1.15%, 500=2.02%, 750=15.63%, 1000=54.18%, 2000=27.02% 00:31:48.731 cpu : usr=0.13%, sys=2.77%, ctx=1172, majf=0, 
minf=32769 00:31:48.731 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.5% 00:31:48.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.731 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:48.731 issued rwts: total=1388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.731 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.731 job2: (groupid=0, jobs=1): err= 0: pid=1793169: Wed Jul 24 07:21:01 2024 00:31:48.731 read: IOPS=42, BW=42.9MiB/s (45.0MB/s)(432MiB/10065msec) 00:31:48.731 slat (usec): min=74, max=135232, avg=23146.55, stdev=24893.88 00:31:48.731 clat (msec): min=62, max=4014, avg=2608.84, stdev=963.33 00:31:48.731 lat (msec): min=76, max=4044, avg=2631.98, stdev=963.93 00:31:48.731 clat percentiles (msec): 00:31:48.731 | 1.00th=[ 176], 5.00th=[ 418], 10.00th=[ 927], 20.00th=[ 2039], 00:31:48.731 | 30.00th=[ 2366], 40.00th=[ 2567], 50.00th=[ 2735], 60.00th=[ 3004], 00:31:48.731 | 70.00th=[ 3205], 80.00th=[ 3440], 90.00th=[ 3641], 95.00th=[ 3910], 00:31:48.731 | 99.00th=[ 3977], 99.50th=[ 4010], 99.90th=[ 4010], 99.95th=[ 4010], 00:31:48.731 | 99.99th=[ 4010] 00:31:48.731 bw ( KiB/s): min=18432, max=61317, per=1.30%, avg=38075.00, stdev=12019.71, samples=15 00:31:48.731 iops : min= 18, max= 59, avg=37.00, stdev=11.65, samples=15 00:31:48.731 lat (msec) : 100=0.69%, 250=1.39%, 500=3.94%, 750=2.78%, 1000=1.85% 00:31:48.731 lat (msec) : 2000=7.87%, >=2000=81.48% 00:31:48.731 cpu : usr=0.03%, sys=1.28%, ctx=1495, majf=0, minf=32769 00:31:48.731 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.7%, 32=7.4%, >=64=85.4% 00:31:48.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.731 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:31:48.731 issued rwts: total=432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.731 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.731 job2: (groupid=0, jobs=1): err= 0: pid=1793170: Wed Jul 24 07:21:01 2024 00:31:48.731 read: IOPS=44, BW=44.6MiB/s (46.8MB/s)(452MiB/10135msec) 00:31:48.731 slat (usec): min=44, max=2081.9k, avg=22136.93, stdev=168632.67 00:31:48.731 clat (msec): min=125, max=7405, avg=1705.38, stdev=2001.50 00:31:48.731 lat (msec): min=227, max=7409, avg=1727.51, stdev=2022.11 00:31:48.731 clat percentiles (msec): 00:31:48.731 | 1.00th=[ 232], 5.00th=[ 376], 10.00th=[ 523], 20.00th=[ 885], 00:31:48.731 | 30.00th=[ 1028], 40.00th=[ 1083], 50.00th=[ 1099], 60.00th=[ 1099], 00:31:48.731 | 70.00th=[ 1116], 80.00th=[ 1200], 90.00th=[ 7148], 95.00th=[ 7349], 00:31:48.732 | 99.00th=[ 7416], 99.50th=[ 7416], 99.90th=[ 7416], 99.95th=[ 7416], 00:31:48.732 | 99.99th=[ 7416] 00:31:48.732 bw ( KiB/s): min=79872, max=139264, per=3.78%, avg=110933.33, stdev=20942.41, samples=6 00:31:48.732 iops : min= 78, max= 136, avg=108.33, stdev=20.45, samples=6 00:31:48.732 lat (msec) : 250=2.21%, 500=6.42%, 750=7.08%, 1000=7.52%, 2000=64.38% 00:31:48.732 lat (msec) : >=2000=12.39% 00:31:48.732 cpu : usr=0.03%, sys=1.51%, ctx=403, majf=0, minf=32769 00:31:48.732 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.5%, 32=7.1%, >=64=86.1% 00:31:48.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.732 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:31:48.732 issued rwts: total=452,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.732 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.732 job2: (groupid=0, jobs=1): err= 0: 
pid=1793171: Wed Jul 24 07:21:01 2024 00:31:48.732 read: IOPS=46, BW=46.0MiB/s (48.3MB/s)(466MiB/10123msec) 00:31:48.732 slat (usec): min=506, max=138190, avg=21489.00, stdev=25270.26 00:31:48.732 clat (msec): min=105, max=3759, avg=2534.37, stdev=804.95 00:31:48.732 lat (msec): min=218, max=3767, avg=2555.85, stdev=804.52 00:31:48.732 clat percentiles (msec): 00:31:48.732 | 1.00th=[ 355], 5.00th=[ 869], 10.00th=[ 1167], 20.00th=[ 1821], 00:31:48.732 | 30.00th=[ 2333], 40.00th=[ 2601], 50.00th=[ 2735], 60.00th=[ 2869], 00:31:48.732 | 70.00th=[ 3004], 80.00th=[ 3272], 90.00th=[ 3440], 95.00th=[ 3507], 00:31:48.732 | 99.00th=[ 3675], 99.50th=[ 3675], 99.90th=[ 3775], 99.95th=[ 3775], 00:31:48.732 | 99.99th=[ 3775] 00:31:48.732 bw ( KiB/s): min=12288, max=69632, per=1.38%, avg=40619.65, stdev=14015.88, samples=17 00:31:48.732 iops : min= 12, max= 68, avg=39.53, stdev=13.69, samples=17 00:31:48.732 lat (msec) : 250=0.86%, 500=1.29%, 750=1.93%, 1000=3.00%, 2000=17.17% 00:31:48.732 lat (msec) : >=2000=75.75% 00:31:48.732 cpu : usr=0.05%, sys=1.46%, ctx=1449, majf=0, minf=32769 00:31:48.732 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.4%, 32=6.9%, >=64=86.5% 00:31:48.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.732 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:31:48.732 issued rwts: total=466,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.732 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.732 job2: (groupid=0, jobs=1): err= 0: pid=1793172: Wed Jul 24 07:21:01 2024 00:31:48.732 read: IOPS=34, BW=34.4MiB/s (36.1MB/s)(346MiB/10063msec) 00:31:48.732 slat (usec): min=125, max=2050.4k, avg=28904.67, stdev=141290.50 00:31:48.732 clat (msec): min=59, max=6326, avg=2184.15, stdev=1363.18 00:31:48.732 lat (msec): min=65, max=6336, avg=2213.05, stdev=1387.19 00:31:48.732 clat percentiles (msec): 00:31:48.732 | 1.00th=[ 68], 5.00th=[ 201], 10.00th=[ 542], 20.00th=[ 1469], 00:31:48.732 | 30.00th=[ 1821], 40.00th=[ 2089], 50.00th=[ 2165], 60.00th=[ 2232], 00:31:48.732 | 70.00th=[ 2299], 80.00th=[ 2400], 90.00th=[ 4245], 95.00th=[ 6007], 00:31:48.732 | 99.00th=[ 6074], 99.50th=[ 6342], 99.90th=[ 6342], 99.95th=[ 6342], 00:31:48.732 | 99.99th=[ 6342] 00:31:48.732 bw ( KiB/s): min=40960, max=81920, per=1.91%, avg=55978.88, stdev=14958.12, samples=8 00:31:48.732 iops : min= 40, max= 80, avg=54.50, stdev=14.62, samples=8 00:31:48.732 lat (msec) : 100=3.47%, 250=2.60%, 500=3.18%, 750=4.05%, 1000=2.60% 00:31:48.732 lat (msec) : 2000=18.21%, >=2000=65.90% 00:31:48.732 cpu : usr=0.00%, sys=1.05%, ctx=1008, majf=0, minf=32769 00:31:48.732 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.3%, 16=4.6%, 32=9.2%, >=64=81.8% 00:31:48.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.732 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:31:48.732 issued rwts: total=346,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.732 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.732 job2: (groupid=0, jobs=1): err= 0: pid=1793173: Wed Jul 24 07:21:01 2024 00:31:48.732 read: IOPS=60, BW=60.1MiB/s (63.0MB/s)(610MiB/10148msec) 00:31:48.732 slat (usec): min=44, max=131069, avg=16450.55, stdev=21791.59 00:31:48.732 clat (msec): min=109, max=4508, avg=1924.41, stdev=560.08 00:31:48.732 lat (msec): min=220, max=4527, avg=1940.86, stdev=560.41 00:31:48.732 clat percentiles (msec): 00:31:48.732 | 1.00th=[ 271], 5.00th=[ 751], 10.00th=[ 1003], 20.00th=[ 1703], 00:31:48.732 | 30.00th=[ 
1854], 40.00th=[ 1938], 50.00th=[ 1989], 60.00th=[ 2072], 00:31:48.732 | 70.00th=[ 2198], 80.00th=[ 2333], 90.00th=[ 2467], 95.00th=[ 2635], 00:31:48.732 | 99.00th=[ 2836], 99.50th=[ 2869], 99.90th=[ 4530], 99.95th=[ 4530], 00:31:48.732 | 99.99th=[ 4530] 00:31:48.732 bw ( KiB/s): min=28672, max=112640, per=1.98%, avg=58066.82, stdev=20568.66, samples=17 00:31:48.732 iops : min= 28, max= 110, avg=56.71, stdev=20.09, samples=17 00:31:48.732 lat (msec) : 250=0.33%, 500=2.30%, 750=1.97%, 1000=5.25%, 2000=41.64% 00:31:48.732 lat (msec) : >=2000=48.52% 00:31:48.732 cpu : usr=0.07%, sys=1.50%, ctx=1398, majf=0, minf=32769 00:31:48.732 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.7% 00:31:48.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.732 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:31:48.732 issued rwts: total=610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.732 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.732 job2: (groupid=0, jobs=1): err= 0: pid=1793174: Wed Jul 24 07:21:01 2024 00:31:48.732 read: IOPS=19, BW=19.2MiB/s (20.1MB/s)(194MiB/10114msec) 00:31:48.732 slat (usec): min=126, max=2100.5k, avg=51544.21, stdev=261166.49 00:31:48.732 clat (msec): min=112, max=9886, avg=4501.42, stdev=3613.26 00:31:48.732 lat (msec): min=114, max=9904, avg=4552.96, stdev=3621.24 00:31:48.732 clat percentiles (msec): 00:31:48.732 | 1.00th=[ 115], 5.00th=[ 226], 10.00th=[ 355], 20.00th=[ 592], 00:31:48.732 | 30.00th=[ 986], 40.00th=[ 1351], 50.00th=[ 3507], 60.00th=[ 7148], 00:31:48.732 | 70.00th=[ 7282], 80.00th=[ 9060], 90.00th=[ 9329], 95.00th=[ 9463], 00:31:48.732 | 99.00th=[ 9731], 99.50th=[ 9866], 99.90th=[ 9866], 99.95th=[ 9866], 00:31:48.732 | 99.99th=[ 9866] 00:31:48.732 bw ( KiB/s): min= 2048, max=86016, per=1.56%, avg=45738.67, stdev=42087.94, samples=3 00:31:48.732 iops : min= 2, max= 84, avg=44.67, stdev=41.10, samples=3 00:31:48.732 lat (msec) : 250=6.70%, 500=8.76%, 750=9.79%, 1000=6.19%, 2000=10.31% 00:31:48.732 lat (msec) : >=2000=58.25% 00:31:48.732 cpu : usr=0.01%, sys=1.07%, ctx=414, majf=0, minf=32769 00:31:48.732 IO depths : 1=0.5%, 2=1.0%, 4=2.1%, 8=4.1%, 16=8.2%, 32=16.5%, >=64=67.5% 00:31:48.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.732 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.5% 00:31:48.732 issued rwts: total=194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.732 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.732 job2: (groupid=0, jobs=1): err= 0: pid=1793175: Wed Jul 24 07:21:01 2024 00:31:48.732 read: IOPS=80, BW=80.1MiB/s (84.0MB/s)(809MiB/10101msec) 00:31:48.732 slat (usec): min=48, max=2062.1k, avg=12356.23, stdev=80485.39 00:31:48.732 clat (msec): min=97, max=4112, avg=1427.74, stdev=1154.53 00:31:48.732 lat (msec): min=153, max=4115, avg=1440.10, stdev=1158.39 00:31:48.732 clat percentiles (msec): 00:31:48.732 | 1.00th=[ 186], 5.00th=[ 435], 10.00th=[ 768], 20.00th=[ 802], 00:31:48.732 | 30.00th=[ 818], 40.00th=[ 844], 50.00th=[ 885], 60.00th=[ 986], 00:31:48.732 | 70.00th=[ 1116], 80.00th=[ 1972], 90.00th=[ 4044], 95.00th=[ 4077], 00:31:48.732 | 99.00th=[ 4111], 99.50th=[ 4111], 99.90th=[ 4111], 99.95th=[ 4111], 00:31:48.732 | 99.99th=[ 4111] 00:31:48.732 bw ( KiB/s): min= 2048, max=163840, per=3.36%, avg=98560.07, stdev=62584.59, samples=14 00:31:48.732 iops : min= 2, max= 160, avg=96.07, stdev=61.17, samples=14 00:31:48.732 lat (msec) : 100=0.12%, 250=1.98%, 
500=3.71%, 750=3.71%, 1000=53.40% 00:31:48.732 lat (msec) : 2000=19.28%, >=2000=17.80% 00:31:48.732 cpu : usr=0.02%, sys=2.06%, ctx=985, majf=0, minf=32769 00:31:48.732 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2% 00:31:48.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.732 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:48.732 issued rwts: total=809,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.732 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.733 job2: (groupid=0, jobs=1): err= 0: pid=1793176: Wed Jul 24 07:21:01 2024 00:31:48.733 read: IOPS=27, BW=27.4MiB/s (28.7MB/s)(354MiB/12925msec) 00:31:48.733 slat (usec): min=64, max=2058.6k, avg=30551.42, stdev=187983.34 00:31:48.733 clat (msec): min=1190, max=9041, avg=4382.00, stdev=2980.84 00:31:48.733 lat (msec): min=1203, max=9050, avg=4412.55, stdev=2981.06 00:31:48.733 clat percentiles (msec): 00:31:48.733 | 1.00th=[ 1200], 5.00th=[ 1385], 10.00th=[ 1720], 20.00th=[ 1955], 00:31:48.733 | 30.00th=[ 2140], 40.00th=[ 2299], 50.00th=[ 2601], 60.00th=[ 3138], 00:31:48.733 | 70.00th=[ 6409], 80.00th=[ 8658], 90.00th=[ 8792], 95.00th=[ 8792], 00:31:48.733 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 9060], 99.95th=[ 9060], 00:31:48.733 | 99.99th=[ 9060] 00:31:48.733 bw ( KiB/s): min= 2052, max=204800, per=1.76%, avg=51655.56, stdev=59318.96, samples=9 00:31:48.733 iops : min= 2, max= 200, avg=50.44, stdev=57.93, samples=9 00:31:48.733 lat (msec) : 2000=23.73%, >=2000=76.27% 00:31:48.733 cpu : usr=0.02%, sys=1.03%, ctx=746, majf=0, minf=32769 00:31:48.733 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.3%, 16=4.5%, 32=9.0%, >=64=82.2% 00:31:48.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.733 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:31:48.733 issued rwts: total=354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.733 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.733 job2: (groupid=0, jobs=1): err= 0: pid=1793177: Wed Jul 24 07:21:01 2024 00:31:48.733 read: IOPS=51, BW=51.9MiB/s (54.5MB/s)(526MiB/10129msec) 00:31:48.733 slat (usec): min=100, max=1127.4k, avg=19029.78, stdev=52688.19 00:31:48.733 clat (msec): min=115, max=3388, avg=2102.05, stdev=631.81 00:31:48.733 lat (msec): min=252, max=3400, avg=2121.08, stdev=630.12 00:31:48.733 clat percentiles (msec): 00:31:48.733 | 1.00th=[ 489], 5.00th=[ 1070], 10.00th=[ 1418], 20.00th=[ 1653], 00:31:48.733 | 30.00th=[ 1821], 40.00th=[ 1921], 50.00th=[ 2056], 60.00th=[ 2198], 00:31:48.733 | 70.00th=[ 2299], 80.00th=[ 2534], 90.00th=[ 3171], 95.00th=[ 3306], 00:31:48.733 | 99.00th=[ 3373], 99.50th=[ 3373], 99.90th=[ 3373], 99.95th=[ 3373], 00:31:48.733 | 99.99th=[ 3373] 00:31:48.733 bw ( KiB/s): min=20480, max=108544, per=1.99%, avg=58221.71, stdev=22773.20, samples=14 00:31:48.733 iops : min= 20, max= 106, avg=56.86, stdev=22.24, samples=14 00:31:48.733 lat (msec) : 250=0.19%, 500=0.95%, 750=1.90%, 1000=1.90%, 2000=41.44% 00:31:48.733 lat (msec) : >=2000=53.61% 00:31:48.733 cpu : usr=0.00%, sys=1.44%, ctx=1320, majf=0, minf=32769 00:31:48.733 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.1%, >=64=88.0% 00:31:48.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.733 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:31:48.733 issued rwts: total=526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.733 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:31:48.733 job2: (groupid=0, jobs=1): err= 0: pid=1793178: Wed Jul 24 07:21:01 2024 00:31:48.733 read: IOPS=43, BW=43.8MiB/s (45.9MB/s)(443MiB/10123msec) 00:31:48.733 slat (usec): min=120, max=154821, avg=22576.28, stdev=23788.71 00:31:48.733 clat (msec): min=118, max=3447, avg=2589.81, stdev=837.86 00:31:48.733 lat (msec): min=131, max=3499, avg=2612.39, stdev=836.45 00:31:48.733 clat percentiles (msec): 00:31:48.733 | 1.00th=[ 190], 5.00th=[ 426], 10.00th=[ 969], 20.00th=[ 2500], 00:31:48.733 | 30.00th=[ 2635], 40.00th=[ 2802], 50.00th=[ 2903], 60.00th=[ 3004], 00:31:48.733 | 70.00th=[ 3071], 80.00th=[ 3138], 90.00th=[ 3205], 95.00th=[ 3272], 00:31:48.733 | 99.00th=[ 3406], 99.50th=[ 3406], 99.90th=[ 3440], 99.95th=[ 3440], 00:31:48.733 | 99.99th=[ 3440] 00:31:48.733 bw ( KiB/s): min=22528, max=71680, per=1.47%, avg=43134.93, stdev=16414.95, samples=15 00:31:48.733 iops : min= 22, max= 70, avg=42.00, stdev=16.09, samples=15 00:31:48.733 lat (msec) : 250=1.81%, 500=4.74%, 750=1.81%, 1000=1.81%, 2000=5.87% 00:31:48.733 lat (msec) : >=2000=83.97% 00:31:48.733 cpu : usr=0.00%, sys=1.32%, ctx=1524, majf=0, minf=32769 00:31:48.733 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.2%, >=64=85.8% 00:31:48.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.733 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:31:48.733 issued rwts: total=443,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.733 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.733 job3: (groupid=0, jobs=1): err= 0: pid=1793179: Wed Jul 24 07:21:01 2024 00:31:48.733 read: IOPS=7, BW=7687KiB/s (7871kB/s)(96.0MiB/12789msec) 00:31:48.733 slat (usec): min=462, max=2169.7k, avg=111312.05, stdev=423338.75 00:31:48.733 clat (msec): min=2102, max=12775, avg=11421.89, stdev=1769.25 00:31:48.733 lat (msec): min=4231, max=12788, avg=11533.20, stdev=1491.02 00:31:48.733 clat percentiles (msec): 00:31:48.733 | 1.00th=[ 2106], 5.00th=[ 6409], 10.00th=[10805], 20.00th=[11073], 00:31:48.733 | 30.00th=[11342], 40.00th=[11610], 50.00th=[11879], 60.00th=[12013], 00:31:48.733 | 70.00th=[12147], 80.00th=[12416], 90.00th=[12684], 95.00th=[12684], 00:31:48.733 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:31:48.733 | 99.99th=[12818] 00:31:48.733 lat (msec) : >=2000=100.00% 00:31:48.733 cpu : usr=0.02%, sys=0.53%, ctx=382, majf=0, minf=24577 00:31:48.733 IO depths : 1=1.0%, 2=2.1%, 4=4.2%, 8=8.3%, 16=16.7%, 32=33.3%, >=64=34.4% 00:31:48.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.733 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:31:48.733 issued rwts: total=96,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.733 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.733 job3: (groupid=0, jobs=1): err= 0: pid=1793180: Wed Jul 24 07:21:01 2024 00:31:48.733 read: IOPS=5, BW=6059KiB/s (6204kB/s)(76.0MiB/12845msec) 00:31:48.733 slat (usec): min=493, max=2219.2k, avg=141357.46, stdev=472430.44 00:31:48.733 clat (msec): min=2101, max=12820, avg=11350.06, stdev=1977.17 00:31:48.733 lat (msec): min=4171, max=12844, avg=11491.42, stdev=1666.81 00:31:48.733 clat percentiles (msec): 00:31:48.733 | 1.00th=[ 2106], 5.00th=[ 6275], 10.00th=[10805], 20.00th=[11073], 00:31:48.733 | 30.00th=[11342], 40.00th=[11610], 50.00th=[11745], 60.00th=[12013], 00:31:48.733 | 70.00th=[12281], 80.00th=[12550], 90.00th=[12550], 95.00th=[12818], 00:31:48.733 | 
99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:31:48.733 | 99.99th=[12818] 00:31:48.733 lat (msec) : >=2000=100.00% 00:31:48.733 cpu : usr=0.01%, sys=0.46%, ctx=429, majf=0, minf=19457 00:31:48.733 IO depths : 1=1.3%, 2=2.6%, 4=5.3%, 8=10.5%, 16=21.1%, 32=42.1%, >=64=17.1% 00:31:48.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.733 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:31:48.733 issued rwts: total=76,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.733 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.733 job3: (groupid=0, jobs=1): err= 0: pid=1793181: Wed Jul 24 07:21:01 2024 00:31:48.733 read: IOPS=85, BW=85.9MiB/s (90.1MB/s)(871MiB/10137msec) 00:31:48.733 slat (usec): min=45, max=149590, avg=11510.64, stdev=17109.61 00:31:48.733 clat (msec): min=106, max=3289, avg=1354.31, stdev=748.12 00:31:48.733 lat (msec): min=157, max=3309, avg=1365.82, stdev=750.58 00:31:48.733 clat percentiles (msec): 00:31:48.733 | 1.00th=[ 207], 5.00th=[ 477], 10.00th=[ 844], 20.00th=[ 869], 00:31:48.733 | 30.00th=[ 927], 40.00th=[ 995], 50.00th=[ 1167], 60.00th=[ 1200], 00:31:48.733 | 70.00th=[ 1385], 80.00th=[ 1670], 90.00th=[ 2869], 95.00th=[ 3205], 00:31:48.733 | 99.00th=[ 3272], 99.50th=[ 3272], 99.90th=[ 3306], 99.95th=[ 3306], 00:31:48.733 | 99.99th=[ 3306] 00:31:48.733 bw ( KiB/s): min=14336, max=155648, per=3.05%, avg=89501.47, stdev=52631.14, samples=17 00:31:48.733 iops : min= 14, max= 152, avg=87.35, stdev=51.42, samples=17 00:31:48.733 lat (msec) : 250=1.84%, 500=3.44%, 750=3.10%, 1000=32.03%, 2000=44.78% 00:31:48.733 lat (msec) : >=2000=14.81% 00:31:48.733 cpu : usr=0.03%, sys=1.72%, ctx=1342, majf=0, minf=32769 00:31:48.733 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.7%, >=64=92.8% 00:31:48.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.733 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:48.733 issued rwts: total=871,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.733 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.733 job3: (groupid=0, jobs=1): err= 0: pid=1793182: Wed Jul 24 07:21:01 2024 00:31:48.733 read: IOPS=10, BW=10.9MiB/s (11.5MB/s)(141MiB/12911msec) 00:31:48.733 slat (usec): min=661, max=2178.7k, avg=76631.17, stdev=304701.90 00:31:48.733 clat (msec): min=2104, max=12699, avg=9378.39, stdev=2935.46 00:31:48.733 lat (msec): min=4201, max=12706, avg=9455.02, stdev=2882.30 00:31:48.733 clat percentiles (msec): 00:31:48.733 | 1.00th=[ 4212], 5.00th=[ 4732], 10.00th=[ 5201], 20.00th=[ 5738], 00:31:48.733 | 30.00th=[ 6342], 40.00th=[ 8490], 50.00th=[10939], 60.00th=[11342], 00:31:48.733 | 70.00th=[11745], 80.00th=[12013], 90.00th=[12281], 95.00th=[12550], 00:31:48.733 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:31:48.733 | 99.99th=[12684] 00:31:48.733 bw ( KiB/s): min= 2052, max=14336, per=0.24%, avg=7169.00, stdev=5417.24, samples=4 00:31:48.733 iops : min= 2, max= 14, avg= 7.00, stdev= 5.29, samples=4 00:31:48.733 lat (msec) : >=2000=100.00% 00:31:48.733 cpu : usr=0.04%, sys=0.65%, ctx=639, majf=0, minf=32769 00:31:48.733 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=5.7%, 16=11.3%, 32=22.7%, >=64=55.3% 00:31:48.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.733 complete : 0=0.0%, 4=93.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=6.7% 00:31:48.733 issued rwts: total=141,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.733 
latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.733 job3: (groupid=0, jobs=1): err= 0: pid=1793183: Wed Jul 24 07:21:01 2024 00:31:48.733 read: IOPS=43, BW=43.4MiB/s (45.5MB/s)(556MiB/12805msec) 00:31:48.733 slat (usec): min=39, max=2021.2k, avg=19238.88, stdev=97586.36 00:31:48.733 clat (msec): min=862, max=7884, avg=2591.56, stdev=2292.85 00:31:48.733 lat (msec): min=865, max=7893, avg=2610.80, stdev=2299.73 00:31:48.733 clat percentiles (msec): 00:31:48.733 | 1.00th=[ 869], 5.00th=[ 894], 10.00th=[ 911], 20.00th=[ 961], 00:31:48.733 | 30.00th=[ 995], 40.00th=[ 1036], 50.00th=[ 1318], 60.00th=[ 1989], 00:31:48.733 | 70.00th=[ 2735], 80.00th=[ 4597], 90.00th=[ 7483], 95.00th=[ 7752], 00:31:48.733 | 99.00th=[ 7886], 99.50th=[ 7886], 99.90th=[ 7886], 99.95th=[ 7886], 00:31:48.733 | 99.99th=[ 7886] 00:31:48.734 bw ( KiB/s): min= 2052, max=143360, per=2.30%, avg=67582.38, stdev=57382.19, samples=13 00:31:48.734 iops : min= 2, max= 140, avg=65.92, stdev=56.12, samples=13 00:31:48.734 lat (msec) : 1000=35.43%, 2000=25.00%, >=2000=39.57% 00:31:48.734 cpu : usr=0.02%, sys=0.87%, ctx=1084, majf=0, minf=32769 00:31:48.734 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.8%, >=64=88.7% 00:31:48.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.734 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:31:48.734 issued rwts: total=556,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.734 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.734 job3: (groupid=0, jobs=1): err= 0: pid=1793184: Wed Jul 24 07:21:01 2024 00:31:48.734 read: IOPS=19, BW=19.9MiB/s (20.8MB/s)(256MiB/12877msec) 00:31:48.734 slat (usec): min=490, max=2084.5k, avg=42074.20, stdev=208880.15 00:31:48.734 clat (msec): min=1715, max=11075, avg=5998.50, stdev=2808.11 00:31:48.734 lat (msec): min=1720, max=11082, avg=6040.57, stdev=2814.05 00:31:48.734 clat percentiles (msec): 00:31:48.734 | 1.00th=[ 1787], 5.00th=[ 1955], 10.00th=[ 2022], 20.00th=[ 3608], 00:31:48.734 | 30.00th=[ 3943], 40.00th=[ 5873], 50.00th=[ 5940], 60.00th=[ 6208], 00:31:48.734 | 70.00th=[ 7684], 80.00th=[ 7752], 90.00th=[10671], 95.00th=[10805], 00:31:48.734 | 99.00th=[11073], 99.50th=[11073], 99.90th=[11073], 99.95th=[11073], 00:31:48.734 | 99.99th=[11073] 00:31:48.734 bw ( KiB/s): min= 2043, max=67584, per=1.00%, avg=29343.56, stdev=24740.24, samples=9 00:31:48.734 iops : min= 1, max= 66, avg=28.33, stdev=24.36, samples=9 00:31:48.734 lat (msec) : 2000=6.25%, >=2000=93.75% 00:31:48.734 cpu : usr=0.01%, sys=0.94%, ctx=633, majf=0, minf=32769 00:31:48.734 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.1%, 16=6.2%, 32=12.5%, >=64=75.4% 00:31:48.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.734 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:31:48.734 issued rwts: total=256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.734 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.734 job3: (groupid=0, jobs=1): err= 0: pid=1793185: Wed Jul 24 07:21:01 2024 00:31:48.734 read: IOPS=13, BW=14.0MiB/s (14.6MB/s)(179MiB/12823msec) 00:31:48.734 slat (usec): min=1270, max=2120.6k, avg=59883.77, stdev=270906.14 00:31:48.734 clat (msec): min=2102, max=11706, avg=8182.16, stdev=3511.80 00:31:48.734 lat (msec): min=2405, max=11716, avg=8242.04, stdev=3482.59 00:31:48.734 clat percentiles (msec): 00:31:48.734 | 1.00th=[ 2400], 5.00th=[ 2500], 10.00th=[ 2802], 20.00th=[ 2903], 00:31:48.734 | 30.00th=[ 6342], 
40.00th=[ 9194], 50.00th=[ 9731], 60.00th=[10537], 00:31:48.734 | 70.00th=[10805], 80.00th=[11073], 90.00th=[11342], 95.00th=[11610], 00:31:48.734 | 99.00th=[11745], 99.50th=[11745], 99.90th=[11745], 99.95th=[11745], 00:31:48.734 | 99.99th=[11745] 00:31:48.734 bw ( KiB/s): min= 2043, max=38912, per=0.45%, avg=13311.87, stdev=13045.00, samples=8 00:31:48.734 iops : min= 1, max= 38, avg=12.87, stdev=12.87, samples=8 00:31:48.734 lat (msec) : >=2000=100.00% 00:31:48.734 cpu : usr=0.02%, sys=0.70%, ctx=684, majf=0, minf=32769 00:31:48.734 IO depths : 1=0.6%, 2=1.1%, 4=2.2%, 8=4.5%, 16=8.9%, 32=17.9%, >=64=64.8% 00:31:48.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.734 complete : 0=0.0%, 4=98.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.9% 00:31:48.734 issued rwts: total=179,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.734 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.734 job3: (groupid=0, jobs=1): err= 0: pid=1793186: Wed Jul 24 07:21:01 2024 00:31:48.734 read: IOPS=20, BW=20.8MiB/s (21.8MB/s)(226MiB/10866msec) 00:31:48.734 slat (usec): min=668, max=2160.0k, avg=47850.73, stdev=228040.54 00:31:48.734 clat (msec): min=50, max=8653, avg=5122.85, stdev=1752.35 00:31:48.734 lat (msec): min=2147, max=8688, avg=5170.70, stdev=1743.53 00:31:48.734 clat percentiles (msec): 00:31:48.734 | 1.00th=[ 2198], 5.00th=[ 2433], 10.00th=[ 2668], 20.00th=[ 3171], 00:31:48.734 | 30.00th=[ 3675], 40.00th=[ 4111], 50.00th=[ 6208], 60.00th=[ 6477], 00:31:48.734 | 70.00th=[ 6611], 80.00th=[ 6678], 90.00th=[ 6678], 95.00th=[ 6745], 00:31:48.734 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:31:48.734 | 99.99th=[ 8658] 00:31:48.734 bw ( KiB/s): min= 4096, max=53248, per=1.14%, avg=33450.67, stdev=18889.03, samples=6 00:31:48.734 iops : min= 4, max= 52, avg=32.67, stdev=18.45, samples=6 00:31:48.734 lat (msec) : 100=0.44%, >=2000=99.56% 00:31:48.734 cpu : usr=0.00%, sys=0.91%, ctx=717, majf=0, minf=32769 00:31:48.734 IO depths : 1=0.4%, 2=0.9%, 4=1.8%, 8=3.5%, 16=7.1%, 32=14.2%, >=64=72.1% 00:31:48.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.734 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:31:48.734 issued rwts: total=226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.734 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.734 job3: (groupid=0, jobs=1): err= 0: pid=1793187: Wed Jul 24 07:21:01 2024 00:31:48.734 read: IOPS=39, BW=39.2MiB/s (41.1MB/s)(425MiB/10851msec) 00:31:48.734 slat (usec): min=43, max=2132.5k, avg=25471.92, stdev=197091.00 00:31:48.734 clat (msec): min=22, max=7055, avg=2288.68, stdev=2151.90 00:31:48.734 lat (msec): min=661, max=7060, avg=2314.15, stdev=2167.54 00:31:48.734 clat percentiles (msec): 00:31:48.734 | 1.00th=[ 659], 5.00th=[ 676], 10.00th=[ 684], 20.00th=[ 726], 00:31:48.734 | 30.00th=[ 760], 40.00th=[ 1183], 50.00th=[ 1200], 60.00th=[ 1955], 00:31:48.734 | 70.00th=[ 2668], 80.00th=[ 2937], 90.00th=[ 6946], 95.00th=[ 7013], 00:31:48.734 | 99.00th=[ 7013], 99.50th=[ 7080], 99.90th=[ 7080], 99.95th=[ 7080], 00:31:48.734 | 99.99th=[ 7080] 00:31:48.734 bw ( KiB/s): min=67584, max=180224, per=4.15%, avg=121651.20, stdev=53456.33, samples=5 00:31:48.734 iops : min= 66, max= 176, avg=118.80, stdev=52.20, samples=5 00:31:48.734 lat (msec) : 50=0.24%, 750=26.59%, 1000=12.94%, 2000=21.41%, >=2000=38.82% 00:31:48.734 cpu : usr=0.04%, sys=1.24%, ctx=363, majf=0, minf=32769 00:31:48.734 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 
8=1.9%, 16=3.8%, 32=7.5%, >=64=85.2% 00:31:48.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.734 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:31:48.734 issued rwts: total=425,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.734 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.734 job3: (groupid=0, jobs=1): err= 0: pid=1793188: Wed Jul 24 07:21:01 2024 00:31:48.734 read: IOPS=14, BW=14.7MiB/s (15.4MB/s)(189MiB/12837msec) 00:31:48.734 slat (usec): min=443, max=2072.3k, avg=56784.51, stdev=283969.35 00:31:48.734 clat (msec): min=1510, max=11606, avg=7849.48, stdev=3855.73 00:31:48.734 lat (msec): min=1518, max=11610, avg=7906.27, stdev=3833.94 00:31:48.734 clat percentiles (msec): 00:31:48.734 | 1.00th=[ 1519], 5.00th=[ 1569], 10.00th=[ 1586], 20.00th=[ 3574], 00:31:48.734 | 30.00th=[ 4245], 40.00th=[ 8221], 50.00th=[10537], 60.00th=[10805], 00:31:48.734 | 70.00th=[10939], 80.00th=[11208], 90.00th=[11476], 95.00th=[11476], 00:31:48.734 | 99.00th=[11610], 99.50th=[11610], 99.90th=[11610], 99.95th=[11610], 00:31:48.734 | 99.99th=[11610] 00:31:48.734 bw ( KiB/s): min= 2052, max=53248, per=0.62%, avg=18128.00, stdev=17070.11, samples=7 00:31:48.734 iops : min= 2, max= 52, avg=17.29, stdev=16.81, samples=7 00:31:48.734 lat (msec) : 2000=17.46%, >=2000=82.54% 00:31:48.734 cpu : usr=0.00%, sys=0.71%, ctx=448, majf=0, minf=32769 00:31:48.734 IO depths : 1=0.5%, 2=1.1%, 4=2.1%, 8=4.2%, 16=8.5%, 32=16.9%, >=64=66.7% 00:31:48.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.734 complete : 0=0.0%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.6% 00:31:48.734 issued rwts: total=189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.734 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.734 job3: (groupid=0, jobs=1): err= 0: pid=1793189: Wed Jul 24 07:21:01 2024 00:31:48.734 read: IOPS=18, BW=18.6MiB/s (19.5MB/s)(240MiB/12905msec) 00:31:48.734 slat (usec): min=49, max=2067.7k, avg=44993.88, stdev=256097.66 00:31:48.734 clat (msec): min=1160, max=11772, avg=6466.30, stdev=3824.40 00:31:48.734 lat (msec): min=1170, max=11775, avg=6511.30, stdev=3824.84 00:31:48.734 clat percentiles (msec): 00:31:48.734 | 1.00th=[ 1167], 5.00th=[ 1234], 10.00th=[ 1284], 20.00th=[ 1670], 00:31:48.734 | 30.00th=[ 3641], 40.00th=[ 5738], 50.00th=[ 6141], 60.00th=[ 6409], 00:31:48.734 | 70.00th=[10537], 80.00th=[10939], 90.00th=[11476], 95.00th=[11610], 00:31:48.734 | 99.00th=[11745], 99.50th=[11745], 99.90th=[11745], 99.95th=[11745], 00:31:48.734 | 99.99th=[11745] 00:31:48.734 bw ( KiB/s): min= 2048, max=67584, per=0.99%, avg=28928.50, stdev=25752.55, samples=8 00:31:48.734 iops : min= 2, max= 66, avg=28.25, stdev=25.15, samples=8 00:31:48.734 lat (msec) : 2000=25.83%, >=2000=74.17% 00:31:48.734 cpu : usr=0.00%, sys=0.91%, ctx=450, majf=0, minf=32769 00:31:48.734 IO depths : 1=0.4%, 2=0.8%, 4=1.7%, 8=3.3%, 16=6.7%, 32=13.3%, >=64=73.8% 00:31:48.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.734 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:31:48.734 issued rwts: total=240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.734 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.734 job3: (groupid=0, jobs=1): err= 0: pid=1793190: Wed Jul 24 07:21:01 2024 00:31:48.734 read: IOPS=11, BW=11.8MiB/s (12.4MB/s)(153MiB/12936msec) 00:31:48.734 slat (usec): min=577, max=4200.6k, avg=70792.14, stdev=414621.42 00:31:48.734 clat 
(msec): min=1578, max=12903, avg=10322.34, stdev=3740.46 00:31:48.734 lat (msec): min=1579, max=12904, avg=10393.14, stdev=3685.34 00:31:48.734 clat percentiles (msec): 00:31:48.734 | 1.00th=[ 1603], 5.00th=[ 1838], 10.00th=[ 2056], 20.00th=[10671], 00:31:48.734 | 30.00th=[11073], 40.00th=[11610], 50.00th=[11879], 60.00th=[12147], 00:31:48.734 | 70.00th=[12550], 80.00th=[12684], 90.00th=[12818], 95.00th=[12818], 00:31:48.734 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:31:48.734 | 99.99th=[12953] 00:31:48.734 bw ( KiB/s): min= 2052, max=28672, per=0.36%, avg=10650.40, stdev=11160.36, samples=5 00:31:48.734 iops : min= 2, max= 28, avg=10.40, stdev=10.90, samples=5 00:31:48.734 lat (msec) : 2000=9.15%, >=2000=90.85% 00:31:48.734 cpu : usr=0.00%, sys=0.88%, ctx=438, majf=0, minf=32769 00:31:48.734 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=5.2%, 16=10.5%, 32=20.9%, >=64=58.8% 00:31:48.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.734 complete : 0=0.0%, 4=96.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.7% 00:31:48.734 issued rwts: total=153,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.734 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.735 job3: (groupid=0, jobs=1): err= 0: pid=1793191: Wed Jul 24 07:21:01 2024 00:31:48.735 read: IOPS=15, BW=15.4MiB/s (16.2MB/s)(167MiB/10833msec) 00:31:48.735 slat (usec): min=89, max=2077.5k, avg=64556.79, stdev=315284.38 00:31:48.735 clat (msec): min=50, max=10724, avg=7489.92, stdev=2810.96 00:31:48.735 lat (msec): min=2015, max=10761, avg=7554.48, stdev=2760.03 00:31:48.735 clat percentiles (msec): 00:31:48.735 | 1.00th=[ 2022], 5.00th=[ 2165], 10.00th=[ 2165], 20.00th=[ 4329], 00:31:48.735 | 30.00th=[ 6409], 40.00th=[ 8221], 50.00th=[ 8423], 60.00th=[ 9463], 00:31:48.735 | 70.00th=[ 9597], 80.00th=[ 9866], 90.00th=[10134], 95.00th=[10134], 00:31:48.735 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:31:48.735 | 99.99th=[10671] 00:31:48.735 bw ( KiB/s): min=12288, max=43008, per=0.91%, avg=26624.00, stdev=15462.06, samples=3 00:31:48.735 iops : min= 12, max= 42, avg=26.00, stdev=15.10, samples=3 00:31:48.735 lat (msec) : 100=0.60%, >=2000=99.40% 00:31:48.735 cpu : usr=0.00%, sys=1.09%, ctx=311, majf=0, minf=32769 00:31:48.735 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.8%, 16=9.6%, 32=19.2%, >=64=62.3% 00:31:48.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.735 complete : 0=0.0%, 4=97.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.4% 00:31:48.735 issued rwts: total=167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.735 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.735 job4: (groupid=0, jobs=1): err= 0: pid=1793192: Wed Jul 24 07:21:01 2024 00:31:48.735 read: IOPS=26, BW=26.3MiB/s (27.6MB/s)(285MiB/10833msec) 00:31:48.735 slat (usec): min=58, max=2032.0k, avg=37697.88, stdev=230915.48 00:31:48.735 clat (msec): min=87, max=7691, avg=3281.89, stdev=1581.44 00:31:48.735 lat (msec): min=1241, max=7714, avg=3319.58, stdev=1599.04 00:31:48.735 clat percentiles (msec): 00:31:48.735 | 1.00th=[ 1234], 5.00th=[ 1250], 10.00th=[ 1267], 20.00th=[ 1368], 00:31:48.735 | 30.00th=[ 2433], 40.00th=[ 2735], 50.00th=[ 3004], 60.00th=[ 3272], 00:31:48.735 | 70.00th=[ 4799], 80.00th=[ 5000], 90.00th=[ 5201], 95.00th=[ 5269], 00:31:48.735 | 99.00th=[ 6946], 99.50th=[ 6946], 99.90th=[ 7684], 99.95th=[ 7684], 00:31:48.735 | 99.99th=[ 7684] 00:31:48.735 bw ( KiB/s): min= 4096, max=110592, per=2.19%, avg=64298.20, 
stdev=47702.32, samples=5 00:31:48.735 iops : min= 4, max= 108, avg=62.60, stdev=46.80, samples=5 00:31:48.735 lat (msec) : 100=0.35%, 2000=22.81%, >=2000=76.84% 00:31:48.735 cpu : usr=0.00%, sys=0.65%, ctx=431, majf=0, minf=32769 00:31:48.735 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.6%, 32=11.2%, >=64=77.9% 00:31:48.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.735 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:31:48.735 issued rwts: total=285,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.735 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.735 job4: (groupid=0, jobs=1): err= 0: pid=1793193: Wed Jul 24 07:21:01 2024 00:31:48.735 read: IOPS=19, BW=19.3MiB/s (20.3MB/s)(207MiB/10711msec) 00:31:48.735 slat (usec): min=44, max=2032.0k, avg=51625.00, stdev=273219.88 00:31:48.735 clat (msec): min=23, max=9512, avg=5994.42, stdev=3239.99 00:31:48.735 lat (msec): min=1403, max=9523, avg=6046.04, stdev=3216.48 00:31:48.735 clat percentiles (msec): 00:31:48.735 | 1.00th=[ 1401], 5.00th=[ 1435], 10.00th=[ 1469], 20.00th=[ 1569], 00:31:48.735 | 30.00th=[ 3373], 40.00th=[ 5470], 50.00th=[ 6477], 60.00th=[ 8658], 00:31:48.735 | 70.00th=[ 8926], 80.00th=[ 9060], 90.00th=[ 9329], 95.00th=[ 9463], 00:31:48.735 | 99.00th=[ 9463], 99.50th=[ 9463], 99.90th=[ 9463], 99.95th=[ 9463], 00:31:48.735 | 99.99th=[ 9463] 00:31:48.735 bw ( KiB/s): min= 6144, max=69632, per=0.92%, avg=26958.50, stdev=23865.32, samples=6 00:31:48.735 iops : min= 6, max= 68, avg=26.17, stdev=23.36, samples=6 00:31:48.735 lat (msec) : 50=0.48%, 2000=24.64%, >=2000=74.88% 00:31:48.735 cpu : usr=0.00%, sys=0.98%, ctx=426, majf=0, minf=32769 00:31:48.735 IO depths : 1=0.5%, 2=1.0%, 4=1.9%, 8=3.9%, 16=7.7%, 32=15.5%, >=64=69.6% 00:31:48.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.735 complete : 0=0.0%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.2% 00:31:48.735 issued rwts: total=207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.735 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.735 job4: (groupid=0, jobs=1): err= 0: pid=1793194: Wed Jul 24 07:21:01 2024 00:31:48.735 read: IOPS=13, BW=13.1MiB/s (13.8MB/s)(141MiB/10725msec) 00:31:48.735 slat (usec): min=386, max=4185.1k, avg=71051.28, stdev=417261.57 00:31:48.735 clat (msec): min=706, max=10543, avg=4117.90, stdev=3821.93 00:31:48.735 lat (msec): min=792, max=10543, avg=4188.95, stdev=3850.76 00:31:48.735 clat percentiles (msec): 00:31:48.735 | 1.00th=[ 793], 5.00th=[ 802], 10.00th=[ 936], 20.00th=[ 1099], 00:31:48.735 | 30.00th=[ 1334], 40.00th=[ 1552], 50.00th=[ 1787], 60.00th=[ 2089], 00:31:48.735 | 70.00th=[ 8490], 80.00th=[ 8557], 90.00th=[ 9866], 95.00th=[10537], 00:31:48.735 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:31:48.735 | 99.99th=[10537] 00:31:48.735 bw ( KiB/s): min=26746, max=26746, per=0.91%, avg=26746.00, stdev= 0.00, samples=1 00:31:48.735 iops : min= 26, max= 26, avg=26.00, stdev= 0.00, samples=1 00:31:48.735 lat (msec) : 750=0.71%, 1000=13.48%, 2000=43.97%, >=2000=41.84% 00:31:48.735 cpu : usr=0.00%, sys=0.80%, ctx=231, majf=0, minf=32769 00:31:48.735 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=5.7%, 16=11.3%, 32=22.7%, >=64=55.3% 00:31:48.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.735 complete : 0=0.0%, 4=93.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=6.7% 00:31:48.735 issued rwts: total=141,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.735 latency 
: target=0, window=0, percentile=100.00%, depth=128 00:31:48.735 job4: (groupid=0, jobs=1): err= 0: pid=1793195: Wed Jul 24 07:21:01 2024 00:31:48.735 read: IOPS=25, BW=25.0MiB/s (26.3MB/s)(270MiB/10779msec) 00:31:48.735 slat (usec): min=1069, max=2148.1k, avg=39830.31, stdev=250732.52 00:31:48.735 clat (msec): min=22, max=7596, avg=1997.19, stdev=1504.02 00:31:48.735 lat (msec): min=564, max=7599, avg=2037.02, stdev=1535.20 00:31:48.735 clat percentiles (msec): 00:31:48.735 | 1.00th=[ 558], 5.00th=[ 558], 10.00th=[ 567], 20.00th=[ 567], 00:31:48.735 | 30.00th=[ 726], 40.00th=[ 885], 50.00th=[ 2567], 60.00th=[ 2702], 00:31:48.735 | 70.00th=[ 2836], 80.00th=[ 2937], 90.00th=[ 3071], 95.00th=[ 4178], 00:31:48.735 | 99.00th=[ 7617], 99.50th=[ 7617], 99.90th=[ 7617], 99.95th=[ 7617], 00:31:48.735 | 99.99th=[ 7617] 00:31:48.735 bw ( KiB/s): min= 8192, max=192512, per=3.31%, avg=96938.67, stdev=92349.43, samples=3 00:31:48.735 iops : min= 8, max= 188, avg=94.67, stdev=90.18, samples=3 00:31:48.735 lat (msec) : 50=0.37%, 750=31.11%, 1000=13.70%, 2000=2.22%, >=2000=52.59% 00:31:48.735 cpu : usr=0.00%, sys=0.77%, ctx=639, majf=0, minf=32769 00:31:48.735 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=3.0%, 16=5.9%, 32=11.9%, >=64=76.7% 00:31:48.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.735 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:31:48.735 issued rwts: total=270,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.735 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.735 job4: (groupid=0, jobs=1): err= 0: pid=1793196: Wed Jul 24 07:21:01 2024 00:31:48.735 read: IOPS=30, BW=30.8MiB/s (32.3MB/s)(329MiB/10674msec) 00:31:48.735 slat (usec): min=45, max=2158.0k, avg=32371.20, stdev=228988.65 00:31:48.735 clat (msec): min=21, max=7145, avg=3101.26, stdev=2974.34 00:31:48.735 lat (msec): min=350, max=7147, avg=3133.64, stdev=2972.27 00:31:48.735 clat percentiles (msec): 00:31:48.735 | 1.00th=[ 351], 5.00th=[ 401], 10.00th=[ 456], 20.00th=[ 550], 00:31:48.735 | 30.00th=[ 693], 40.00th=[ 835], 50.00th=[ 986], 60.00th=[ 2937], 00:31:48.735 | 70.00th=[ 6812], 80.00th=[ 6946], 90.00th=[ 7013], 95.00th=[ 7080], 00:31:48.735 | 99.00th=[ 7148], 99.50th=[ 7148], 99.90th=[ 7148], 99.95th=[ 7148], 00:31:48.735 | 99.99th=[ 7148] 00:31:48.735 bw ( KiB/s): min= 6131, max=247808, per=2.81%, avg=82327.00, stdev=107930.15, samples=5 00:31:48.735 iops : min= 5, max= 242, avg=80.20, stdev=105.58, samples=5 00:31:48.735 lat (msec) : 50=0.30%, 500=16.11%, 750=17.02%, 1000=20.36%, 2000=4.86% 00:31:48.735 lat (msec) : >=2000=41.34% 00:31:48.735 cpu : usr=0.03%, sys=1.26%, ctx=324, majf=0, minf=32769 00:31:48.735 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.9%, 32=9.7%, >=64=80.9% 00:31:48.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.735 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:31:48.735 issued rwts: total=329,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.735 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.735 job4: (groupid=0, jobs=1): err= 0: pid=1793197: Wed Jul 24 07:21:01 2024 00:31:48.735 read: IOPS=35, BW=35.3MiB/s (37.0MB/s)(384MiB/10890msec) 00:31:48.735 slat (usec): min=84, max=2065.4k, avg=28115.28, stdev=204762.41 00:31:48.735 clat (msec): min=89, max=9197, avg=3445.41, stdev=3695.95 00:31:48.735 lat (msec): min=703, max=9201, avg=3473.53, stdev=3700.65 00:31:48.735 clat percentiles (msec): 00:31:48.735 | 1.00th=[ 701], 5.00th=[ 701], 
10.00th=[ 709], 20.00th=[ 709], 00:31:48.735 | 30.00th=[ 718], 40.00th=[ 726], 50.00th=[ 844], 60.00th=[ 1053], 00:31:48.735 | 70.00th=[ 7013], 80.00th=[ 8792], 90.00th=[ 9060], 95.00th=[ 9060], 00:31:48.736 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194], 00:31:48.736 | 99.99th=[ 9194] 00:31:48.736 bw ( KiB/s): min= 6144, max=179864, per=2.55%, avg=74844.43, stdev=81312.74, samples=7 00:31:48.736 iops : min= 6, max= 175, avg=72.86, stdev=79.40, samples=7 00:31:48.736 lat (msec) : 100=0.26%, 750=46.35%, 1000=9.38%, 2000=7.55%, >=2000=36.46% 00:31:48.736 cpu : usr=0.03%, sys=1.49%, ctx=422, majf=0, minf=32128 00:31:48.736 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.2%, 32=8.3%, >=64=83.6% 00:31:48.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.736 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:31:48.736 issued rwts: total=384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.736 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.736 job4: (groupid=0, jobs=1): err= 0: pid=1793198: Wed Jul 24 07:21:01 2024 00:31:48.736 read: IOPS=148, BW=148MiB/s (156MB/s)(1600MiB/10780msec) 00:31:48.736 slat (usec): min=40, max=2017.6k, avg=6243.70, stdev=71639.94 00:31:48.736 clat (msec): min=276, max=2782, avg=820.64, stdev=811.65 00:31:48.736 lat (msec): min=279, max=4148, avg=826.88, stdev=816.61 00:31:48.736 clat percentiles (msec): 00:31:48.736 | 1.00th=[ 279], 5.00th=[ 284], 10.00th=[ 284], 20.00th=[ 284], 00:31:48.736 | 30.00th=[ 288], 40.00th=[ 292], 50.00th=[ 435], 60.00th=[ 651], 00:31:48.736 | 70.00th=[ 718], 80.00th=[ 1083], 90.00th=[ 2601], 95.00th=[ 2635], 00:31:48.736 | 99.00th=[ 2735], 99.50th=[ 2735], 99.90th=[ 2769], 99.95th=[ 2769], 00:31:48.736 | 99.99th=[ 2769] 00:31:48.736 bw ( KiB/s): min=32768, max=458752, per=7.91%, avg=232059.62, stdev=160034.45, samples=13 00:31:48.736 iops : min= 32, max= 448, avg=226.46, stdev=156.25, samples=13 00:31:48.736 lat (msec) : 500=53.88%, 750=18.00%, 1000=6.88%, 2000=5.25%, >=2000=16.00% 00:31:48.736 cpu : usr=0.15%, sys=2.49%, ctx=1390, majf=0, minf=32769 00:31:48.736 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:31:48.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.736 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:48.736 issued rwts: total=1600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.736 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.736 job4: (groupid=0, jobs=1): err= 0: pid=1793199: Wed Jul 24 07:21:01 2024 00:31:48.736 read: IOPS=30, BW=30.5MiB/s (32.0MB/s)(327MiB/10731msec) 00:31:48.736 slat (usec): min=48, max=3276.9k, avg=32740.45, stdev=251849.53 00:31:48.736 clat (msec): min=23, max=8555, avg=3203.27, stdev=2950.74 00:31:48.736 lat (msec): min=269, max=10664, avg=3236.02, stdev=2969.52 00:31:48.736 clat percentiles (msec): 00:31:48.736 | 1.00th=[ 268], 5.00th=[ 271], 10.00th=[ 271], 20.00th=[ 275], 00:31:48.736 | 30.00th=[ 313], 40.00th=[ 401], 50.00th=[ 3641], 60.00th=[ 3943], 00:31:48.736 | 70.00th=[ 4178], 80.00th=[ 7684], 90.00th=[ 7819], 95.00th=[ 7819], 00:31:48.736 | 99.00th=[ 7819], 99.50th=[ 8557], 99.90th=[ 8557], 99.95th=[ 8557], 00:31:48.736 | 99.99th=[ 8557] 00:31:48.736 bw ( KiB/s): min= 2043, max=270336, per=2.32%, avg=67921.00, stdev=101120.48, samples=6 00:31:48.736 iops : min= 1, max= 264, avg=66.00, stdev=98.99, samples=6 00:31:48.736 lat (msec) : 50=0.31%, 500=43.43%, 2000=0.31%, >=2000=55.96% 
00:31:48.736 cpu : usr=0.00%, sys=0.86%, ctx=557, majf=0, minf=32769 00:31:48.736 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.9%, 32=9.8%, >=64=80.7% 00:31:48.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.736 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:31:48.736 issued rwts: total=327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.736 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.736 job4: (groupid=0, jobs=1): err= 0: pid=1793200: Wed Jul 24 07:21:01 2024 00:31:48.736 read: IOPS=41, BW=41.1MiB/s (43.1MB/s)(441MiB/10729msec) 00:31:48.736 slat (usec): min=45, max=2148.2k, avg=24272.00, stdev=196321.84 00:31:48.736 clat (msec): min=23, max=7444, avg=1361.38, stdev=1474.06 00:31:48.736 lat (msec): min=368, max=7451, avg=1385.66, stdev=1500.26 00:31:48.736 clat percentiles (msec): 00:31:48.736 | 1.00th=[ 372], 5.00th=[ 393], 10.00th=[ 422], 20.00th=[ 468], 00:31:48.736 | 30.00th=[ 542], 40.00th=[ 558], 50.00th=[ 584], 60.00th=[ 735], 00:31:48.736 | 70.00th=[ 2198], 80.00th=[ 2333], 90.00th=[ 2467], 95.00th=[ 2534], 00:31:48.736 | 99.00th=[ 7416], 99.50th=[ 7416], 99.90th=[ 7416], 99.95th=[ 7416], 00:31:48.736 | 99.99th=[ 7416] 00:31:48.736 bw ( KiB/s): min=45056, max=309248, per=5.46%, avg=160229.25, stdev=132040.22, samples=4 00:31:48.736 iops : min= 44, max= 302, avg=156.25, stdev=129.19, samples=4 00:31:48.736 lat (msec) : 50=0.23%, 500=25.85%, 750=34.01%, 1000=5.67%, 2000=0.68% 00:31:48.736 lat (msec) : >=2000=33.56% 00:31:48.736 cpu : usr=0.01%, sys=0.98%, ctx=764, majf=0, minf=32769 00:31:48.736 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.3%, >=64=85.7% 00:31:48.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.736 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:31:48.736 issued rwts: total=441,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.736 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.736 job4: (groupid=0, jobs=1): err= 0: pid=1793201: Wed Jul 24 07:21:01 2024 00:31:48.736 read: IOPS=6, BW=6613KiB/s (6771kB/s)(70.0MiB/10840msec) 00:31:48.736 slat (usec): min=585, max=2061.7k, avg=153552.98, stdev=508597.03 00:31:48.736 clat (msec): min=90, max=10837, avg=8468.12, stdev=3409.58 00:31:48.736 lat (msec): min=2113, max=10839, avg=8621.67, stdev=3265.83 00:31:48.736 clat percentiles (msec): 00:31:48.736 | 1.00th=[ 91], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 4329], 00:31:48.736 | 30.00th=[ 8557], 40.00th=[10402], 50.00th=[10537], 60.00th=[10671], 00:31:48.736 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:31:48.736 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:31:48.736 | 99.99th=[10805] 00:31:48.736 lat (msec) : 100=1.43%, >=2000=98.57% 00:31:48.736 cpu : usr=0.00%, sys=0.51%, ctx=156, majf=0, minf=17921 00:31:48.736 IO depths : 1=1.4%, 2=2.9%, 4=5.7%, 8=11.4%, 16=22.9%, 32=45.7%, >=64=10.0% 00:31:48.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.736 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:31:48.736 issued rwts: total=70,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.736 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.736 job4: (groupid=0, jobs=1): err= 0: pid=1793202: Wed Jul 24 07:21:01 2024 00:31:48.736 read: IOPS=4, BW=4825KiB/s (4941kB/s)(51.0MiB/10823msec) 00:31:48.736 slat (usec): min=1200, max=2094.6k, avg=210310.68, stdev=615172.52 
00:31:48.736 clat (msec): min=96, max=10818, avg=7855.06, stdev=3501.27 00:31:48.736 lat (msec): min=2117, max=10822, avg=8065.37, stdev=3344.54 00:31:48.736 clat percentiles (msec): 00:31:48.736 | 1.00th=[ 96], 5.00th=[ 2140], 10.00th=[ 2198], 20.00th=[ 4329], 00:31:48.736 | 30.00th=[ 6477], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[10671], 00:31:48.736 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:31:48.736 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:31:48.736 | 99.99th=[10805] 00:31:48.736 lat (msec) : 100=1.96%, >=2000=98.04% 00:31:48.736 cpu : usr=0.00%, sys=0.45%, ctx=94, majf=0, minf=13057 00:31:48.736 IO depths : 1=2.0%, 2=3.9%, 4=7.8%, 8=15.7%, 16=31.4%, 32=39.2%, >=64=0.0% 00:31:48.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.736 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:31:48.736 issued rwts: total=51,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.736 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.736 job4: (groupid=0, jobs=1): err= 0: pid=1793203: Wed Jul 24 07:21:01 2024 00:31:48.736 read: IOPS=2, BW=2759KiB/s (2825kB/s)(29.0MiB/10764msec) 00:31:48.736 slat (msec): min=2, max=2075, avg=367.80, stdev=778.97 00:31:48.736 clat (msec): min=96, max=10699, avg=5259.95, stdev=2819.91 00:31:48.736 lat (msec): min=2108, max=10763, avg=5627.74, stdev=2818.03 00:31:48.736 clat percentiles (msec): 00:31:48.736 | 1.00th=[ 97], 5.00th=[ 2106], 10.00th=[ 2123], 20.00th=[ 2198], 00:31:48.736 | 30.00th=[ 4279], 40.00th=[ 4279], 50.00th=[ 4329], 60.00th=[ 6409], 00:31:48.736 | 70.00th=[ 6409], 80.00th=[ 8557], 90.00th=[10671], 95.00th=[10671], 00:31:48.736 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:31:48.736 | 99.99th=[10671] 00:31:48.736 lat (msec) : 100=3.45%, >=2000=96.55% 00:31:48.736 cpu : usr=0.01%, sys=0.23%, ctx=74, majf=0, minf=7425 00:31:48.736 IO depths : 1=3.4%, 2=6.9%, 4=13.8%, 8=27.6%, 16=48.3%, 32=0.0%, >=64=0.0% 00:31:48.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.736 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:31:48.736 issued rwts: total=29,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.736 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.736 job4: (groupid=0, jobs=1): err= 0: pid=1793204: Wed Jul 24 07:21:01 2024 00:31:48.736 read: IOPS=91, BW=91.5MiB/s (96.0MB/s)(921MiB/10061msec) 00:31:48.736 slat (usec): min=44, max=137976, avg=10853.72, stdev=25085.13 00:31:48.736 clat (msec): min=59, max=3636, avg=1295.54, stdev=786.33 00:31:48.736 lat (msec): min=64, max=3639, avg=1306.39, stdev=788.56 00:31:48.736 clat percentiles (msec): 00:31:48.736 | 1.00th=[ 192], 5.00th=[ 827], 10.00th=[ 835], 20.00th=[ 844], 00:31:48.736 | 30.00th=[ 844], 40.00th=[ 919], 50.00th=[ 1011], 60.00th=[ 1083], 00:31:48.736 | 70.00th=[ 1234], 80.00th=[ 1368], 90.00th=[ 2937], 95.00th=[ 3272], 00:31:48.736 | 99.00th=[ 3641], 99.50th=[ 3641], 99.90th=[ 3641], 99.95th=[ 3641], 00:31:48.736 | 99.99th=[ 3641] 00:31:48.736 bw ( KiB/s): min= 8192, max=157696, per=3.16%, avg=92756.29, stdev=56722.51, samples=17 00:31:48.736 iops : min= 8, max= 154, avg=90.53, stdev=55.43, samples=17 00:31:48.736 lat (msec) : 100=0.54%, 250=0.65%, 500=1.41%, 750=1.52%, 1000=44.73% 00:31:48.736 lat (msec) : 2000=36.81%, >=2000=14.33% 00:31:48.736 cpu : usr=0.06%, sys=1.75%, ctx=1312, majf=0, minf=32769 00:31:48.736 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 
16=1.7%, 32=3.5%, >=64=93.2% 00:31:48.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.736 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:48.736 issued rwts: total=921,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.736 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.736 job5: (groupid=0, jobs=1): err= 0: pid=1793205: Wed Jul 24 07:21:01 2024 00:31:48.736 read: IOPS=81, BW=81.7MiB/s (85.7MB/s)(820MiB/10038msec) 00:31:48.736 slat (usec): min=43, max=2011.2k, avg=12215.42, stdev=73980.70 00:31:48.737 clat (msec): min=15, max=4050, avg=1075.69, stdev=434.18 00:31:48.737 lat (msec): min=64, max=4096, avg=1087.90, stdev=448.23 00:31:48.737 clat percentiles (msec): 00:31:48.737 | 1.00th=[ 93], 5.00th=[ 368], 10.00th=[ 718], 20.00th=[ 844], 00:31:48.737 | 30.00th=[ 911], 40.00th=[ 995], 50.00th=[ 1045], 60.00th=[ 1099], 00:31:48.737 | 70.00th=[ 1116], 80.00th=[ 1217], 90.00th=[ 1670], 95.00th=[ 1989], 00:31:48.737 | 99.00th=[ 2232], 99.50th=[ 2232], 99.90th=[ 4044], 99.95th=[ 4044], 00:31:48.737 | 99.99th=[ 4044] 00:31:48.737 bw ( KiB/s): min=40878, max=157696, per=4.00%, avg=117450.55, stdev=34273.66, samples=11 00:31:48.737 iops : min= 39, max= 154, avg=114.55, stdev=33.66, samples=11 00:31:48.737 lat (msec) : 20=0.12%, 100=1.59%, 250=2.20%, 500=3.66%, 750=3.78% 00:31:48.737 lat (msec) : 1000=31.10%, 2000=52.56%, >=2000=5.00% 00:31:48.737 cpu : usr=0.08%, sys=1.41%, ctx=947, majf=0, minf=32769 00:31:48.737 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=3.9%, >=64=92.3% 00:31:48.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.737 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:48.737 issued rwts: total=820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.737 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.737 job5: (groupid=0, jobs=1): err= 0: pid=1793206: Wed Jul 24 07:21:01 2024 00:31:48.737 read: IOPS=47, BW=47.8MiB/s (50.1MB/s)(482MiB/10090msec) 00:31:48.737 slat (usec): min=440, max=1934.0k, avg=20748.79, stdev=108038.94 00:31:48.737 clat (msec): min=86, max=5243, avg=2056.60, stdev=1263.97 00:31:48.737 lat (msec): min=94, max=5250, avg=2077.35, stdev=1268.88 00:31:48.737 clat percentiles (msec): 00:31:48.737 | 1.00th=[ 109], 5.00th=[ 558], 10.00th=[ 751], 20.00th=[ 818], 00:31:48.737 | 30.00th=[ 869], 40.00th=[ 1250], 50.00th=[ 1821], 60.00th=[ 2467], 00:31:48.737 | 70.00th=[ 3004], 80.00th=[ 3306], 90.00th=[ 3675], 95.00th=[ 4178], 00:31:48.737 | 99.00th=[ 5134], 99.50th=[ 5134], 99.90th=[ 5269], 99.95th=[ 5269], 00:31:48.737 | 99.99th=[ 5269] 00:31:48.737 bw ( KiB/s): min=20480, max=202347, per=2.46%, avg=72195.10, stdev=59423.35, samples=10 00:31:48.737 iops : min= 20, max= 197, avg=70.30, stdev=57.99, samples=10 00:31:48.737 lat (msec) : 100=0.62%, 250=2.07%, 500=2.07%, 750=4.77%, 1000=25.52% 00:31:48.737 lat (msec) : 2000=18.67%, >=2000=46.27% 00:31:48.737 cpu : usr=0.00%, sys=1.29%, ctx=1297, majf=0, minf=32769 00:31:48.737 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.3%, 32=6.6%, >=64=86.9% 00:31:48.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.737 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:31:48.737 issued rwts: total=482,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.737 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.737 job5: (groupid=0, jobs=1): err= 0: pid=1793207: Wed Jul 24 07:21:01 2024 00:31:48.737 
read: IOPS=294, BW=295MiB/s (309MB/s)(3192MiB/10824msec) 00:31:48.737 slat (usec): min=35, max=2039.5k, avg=3357.31, stdev=50174.68 00:31:48.737 clat (msec): min=99, max=4391, avg=355.62, stdev=502.58 00:31:48.737 lat (msec): min=130, max=4391, avg=358.98, stdev=507.92 00:31:48.737 clat percentiles (msec): 00:31:48.737 | 1.00th=[ 131], 5.00th=[ 140], 10.00th=[ 140], 20.00th=[ 140], 00:31:48.737 | 30.00th=[ 142], 40.00th=[ 142], 50.00th=[ 165], 60.00th=[ 264], 00:31:48.737 | 70.00th=[ 305], 80.00th=[ 401], 90.00th=[ 558], 95.00th=[ 2232], 00:31:48.737 | 99.00th=[ 2500], 99.50th=[ 2668], 99.90th=[ 2668], 99.95th=[ 2668], 00:31:48.737 | 99.99th=[ 4396] 00:31:48.737 bw ( KiB/s): min=77668, max=933888, per=16.46%, avg=482612.23, stdev=293237.38, samples=13 00:31:48.737 iops : min= 75, max= 912, avg=471.15, stdev=286.54, samples=13 00:31:48.737 lat (msec) : 100=0.03%, 250=57.74%, 500=29.86%, 750=7.21%, >=2000=5.17% 00:31:48.737 cpu : usr=0.09%, sys=2.42%, ctx=3601, majf=0, minf=32769 00:31:48.737 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:31:48.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:48.737 issued rwts: total=3192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.737 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.737 job5: (groupid=0, jobs=1): err= 0: pid=1793208: Wed Jul 24 07:21:01 2024 00:31:48.737 read: IOPS=52, BW=52.4MiB/s (55.0MB/s)(528MiB/10070msec) 00:31:48.737 slat (usec): min=56, max=2056.4k, avg=18962.55, stdev=150642.00 00:31:48.737 clat (msec): min=55, max=4998, avg=1313.37, stdev=1118.04 00:31:48.737 lat (msec): min=85, max=5197, avg=1332.34, stdev=1133.33 00:31:48.737 clat percentiles (msec): 00:31:48.737 | 1.00th=[ 125], 5.00th=[ 317], 10.00th=[ 510], 20.00th=[ 584], 00:31:48.737 | 30.00th=[ 609], 40.00th=[ 634], 50.00th=[ 735], 60.00th=[ 869], 00:31:48.737 | 70.00th=[ 1116], 80.00th=[ 3004], 90.00th=[ 3239], 95.00th=[ 3406], 00:31:48.737 | 99.00th=[ 3440], 99.50th=[ 3473], 99.90th=[ 5000], 99.95th=[ 5000], 00:31:48.737 | 99.99th=[ 5000] 00:31:48.737 bw ( KiB/s): min= 2048, max=219136, per=3.99%, avg=117101.83, stdev=78978.48, samples=6 00:31:48.737 iops : min= 2, max= 214, avg=114.33, stdev=77.14, samples=6 00:31:48.737 lat (msec) : 100=0.38%, 250=2.84%, 500=6.25%, 750=42.05%, 1000=16.29% 00:31:48.737 lat (msec) : 2000=7.01%, >=2000=25.19% 00:31:48.737 cpu : usr=0.00%, sys=1.13%, ctx=1091, majf=0, minf=32769 00:31:48.737 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.1%, >=64=88.1% 00:31:48.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.737 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:31:48.737 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.737 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.737 job5: (groupid=0, jobs=1): err= 0: pid=1793209: Wed Jul 24 07:21:01 2024 00:31:48.737 read: IOPS=35, BW=35.8MiB/s (37.5MB/s)(359MiB/10032msec) 00:31:48.737 slat (usec): min=44, max=2076.7k, avg=27855.96, stdev=153455.84 00:31:48.737 clat (msec): min=28, max=6700, avg=1639.90, stdev=852.39 00:31:48.737 lat (msec): min=31, max=6798, avg=1667.76, stdev=896.02 00:31:48.737 clat percentiles (msec): 00:31:48.737 | 1.00th=[ 37], 5.00th=[ 326], 10.00th=[ 651], 20.00th=[ 1099], 00:31:48.737 | 30.00th=[ 1267], 40.00th=[ 1368], 50.00th=[ 1519], 60.00th=[ 1620], 00:31:48.737 | 70.00th=[ 2022], 80.00th=[ 
2400], 90.00th=[ 2668], 95.00th=[ 2769], 00:31:48.737 | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 6678], 99.95th=[ 6678], 00:31:48.737 | 99.99th=[ 6678] 00:31:48.737 bw ( KiB/s): min= 8192, max=169984, per=2.33%, avg=68252.33, stdev=54542.75, samples=6 00:31:48.737 iops : min= 8, max= 166, avg=66.50, stdev=53.35, samples=6 00:31:48.737 lat (msec) : 50=1.11%, 100=0.84%, 250=2.23%, 500=4.46%, 750=2.79% 00:31:48.737 lat (msec) : 1000=7.24%, 2000=50.97%, >=2000=30.36% 00:31:48.737 cpu : usr=0.01%, sys=1.00%, ctx=998, majf=0, minf=32769 00:31:48.737 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.2%, 16=4.5%, 32=8.9%, >=64=82.5% 00:31:48.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.737 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:31:48.737 issued rwts: total=359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.737 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.737 job5: (groupid=0, jobs=1): err= 0: pid=1793210: Wed Jul 24 07:21:01 2024 00:31:48.737 read: IOPS=67, BW=67.3MiB/s (70.6MB/s)(724MiB/10751msec) 00:31:48.737 slat (usec): min=45, max=2012.9k, avg=14710.26, stdev=90992.93 00:31:48.737 clat (msec): min=96, max=3689, avg=1725.88, stdev=902.19 00:31:48.737 lat (msec): min=523, max=3695, avg=1740.60, stdev=902.76 00:31:48.737 clat percentiles (msec): 00:31:48.737 | 1.00th=[ 542], 5.00th=[ 693], 10.00th=[ 701], 20.00th=[ 726], 00:31:48.737 | 30.00th=[ 844], 40.00th=[ 1083], 50.00th=[ 1854], 60.00th=[ 2299], 00:31:48.737 | 70.00th=[ 2400], 80.00th=[ 2500], 90.00th=[ 2702], 95.00th=[ 3339], 00:31:48.737 | 99.00th=[ 3641], 99.50th=[ 3675], 99.90th=[ 3675], 99.95th=[ 3675], 00:31:48.737 | 99.99th=[ 3675] 00:31:48.737 bw ( KiB/s): min= 4087, max=260096, per=2.97%, avg=87185.64, stdev=74115.98, samples=14 00:31:48.737 iops : min= 3, max= 254, avg=85.07, stdev=72.46, samples=14 00:31:48.737 lat (msec) : 100=0.14%, 500=0.14%, 750=22.24%, 1000=14.36%, 2000=16.57% 00:31:48.737 lat (msec) : >=2000=46.55% 00:31:48.737 cpu : usr=0.05%, sys=1.11%, ctx=1635, majf=0, minf=32769 00:31:48.737 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.3% 00:31:48.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.737 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:31:48.737 issued rwts: total=724,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.737 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.737 job5: (groupid=0, jobs=1): err= 0: pid=1793211: Wed Jul 24 07:21:01 2024 00:31:48.737 read: IOPS=26, BW=26.1MiB/s (27.4MB/s)(278MiB/10657msec) 00:31:48.737 slat (usec): min=450, max=2069.2k, avg=38322.31, stdev=198984.31 00:31:48.737 clat (usec): min=1289, max=6079.9k, avg=3661393.19, stdev=1437286.64 00:31:48.738 lat (msec): min=1826, max=6104, avg=3699.72, stdev=1419.16 00:31:48.738 clat percentiles (msec): 00:31:48.738 | 1.00th=[ 1838], 5.00th=[ 2072], 10.00th=[ 2165], 20.00th=[ 2232], 00:31:48.738 | 30.00th=[ 2299], 40.00th=[ 2400], 50.00th=[ 4044], 60.00th=[ 4279], 00:31:48.738 | 70.00th=[ 4597], 80.00th=[ 5067], 90.00th=[ 5873], 95.00th=[ 5940], 00:31:48.738 | 99.00th=[ 6007], 99.50th=[ 6074], 99.90th=[ 6074], 99.95th=[ 6074], 00:31:48.738 | 99.99th=[ 6074] 00:31:48.738 bw ( KiB/s): min= 2048, max=79872, per=1.50%, avg=43879.29, stdev=28132.66, samples=7 00:31:48.738 iops : min= 2, max= 78, avg=42.71, stdev=27.60, samples=7 00:31:48.738 lat (msec) : 2=0.36%, 2000=2.88%, >=2000=96.76% 00:31:48.738 cpu : usr=0.01%, sys=0.81%, ctx=880, majf=0, 
minf=32769 00:31:48.738 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.9%, 16=5.8%, 32=11.5%, >=64=77.3% 00:31:48.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.738 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:31:48.738 issued rwts: total=278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.738 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.738 job5: (groupid=0, jobs=1): err= 0: pid=1793212: Wed Jul 24 07:21:01 2024 00:31:48.738 read: IOPS=226, BW=227MiB/s (238MB/s)(2281MiB/10069msec) 00:31:48.738 slat (usec): min=41, max=2032.2k, avg=4381.54, stdev=60394.47 00:31:48.738 clat (msec): min=62, max=2723, avg=439.37, stdev=577.91 00:31:48.738 lat (msec): min=68, max=2725, avg=443.76, stdev=581.90 00:31:48.738 clat percentiles (msec): 00:31:48.738 | 1.00th=[ 129], 5.00th=[ 130], 10.00th=[ 131], 20.00th=[ 131], 00:31:48.738 | 30.00th=[ 132], 40.00th=[ 167], 50.00th=[ 279], 60.00th=[ 305], 00:31:48.738 | 70.00th=[ 376], 80.00th=[ 460], 90.00th=[ 944], 95.00th=[ 2400], 00:31:48.738 | 99.00th=[ 2668], 99.50th=[ 2702], 99.90th=[ 2735], 99.95th=[ 2735], 00:31:48.738 | 99.99th=[ 2735] 00:31:48.738 bw ( KiB/s): min=30720, max=997376, per=13.19%, avg=386862.73, stdev=287215.77, samples=11 00:31:48.738 iops : min= 30, max= 974, avg=377.73, stdev=280.55, samples=11 00:31:48.738 lat (msec) : 100=0.66%, 250=42.92%, 500=38.49%, 750=5.04%, 1000=5.39% 00:31:48.738 lat (msec) : 2000=1.49%, >=2000=6.01% 00:31:48.738 cpu : usr=0.05%, sys=2.88%, ctx=2031, majf=0, minf=32769 00:31:48.738 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:31:48.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.738 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:48.738 issued rwts: total=2281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.738 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.738 job5: (groupid=0, jobs=1): err= 0: pid=1793213: Wed Jul 24 07:21:01 2024 00:31:48.738 read: IOPS=47, BW=47.4MiB/s (49.7MB/s)(475MiB/10028msec) 00:31:48.738 slat (usec): min=43, max=2036.1k, avg=21049.10, stdev=131608.22 00:31:48.738 clat (msec): min=26, max=6749, avg=2510.85, stdev=1783.17 00:31:48.738 lat (msec): min=28, max=6759, avg=2531.90, stdev=1791.69 00:31:48.738 clat percentiles (msec): 00:31:48.738 | 1.00th=[ 69], 5.00th=[ 489], 10.00th=[ 709], 20.00th=[ 768], 00:31:48.738 | 30.00th=[ 944], 40.00th=[ 1053], 50.00th=[ 2735], 60.00th=[ 3675], 00:31:48.738 | 70.00th=[ 3842], 80.00th=[ 4329], 90.00th=[ 4530], 95.00th=[ 4732], 00:31:48.738 | 99.00th=[ 6678], 99.50th=[ 6745], 99.90th=[ 6745], 99.95th=[ 6745], 00:31:48.738 | 99.99th=[ 6745] 00:31:48.738 bw ( KiB/s): min= 2048, max=183952, per=2.04%, avg=59713.27, stdev=49326.13, samples=11 00:31:48.738 iops : min= 2, max= 179, avg=58.09, stdev=48.05, samples=11 00:31:48.738 lat (msec) : 50=0.84%, 100=0.84%, 250=1.47%, 500=2.32%, 750=13.47% 00:31:48.738 lat (msec) : 1000=16.00%, 2000=14.95%, >=2000=50.11% 00:31:48.738 cpu : usr=0.04%, sys=1.12%, ctx=1020, majf=0, minf=32769 00:31:48.738 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.4%, 32=6.7%, >=64=86.7% 00:31:48.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.738 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:31:48.738 issued rwts: total=475,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.738 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.738 job5: (groupid=0, 
jobs=1): err= 0: pid=1793214: Wed Jul 24 07:21:01 2024 00:31:48.738 read: IOPS=62, BW=62.1MiB/s (65.1MB/s)(624MiB/10051msec) 00:31:48.738 slat (usec): min=51, max=2056.5k, avg=16045.33, stdev=114597.58 00:31:48.738 clat (msec): min=35, max=4097, avg=1953.80, stdev=1449.24 00:31:48.738 lat (msec): min=77, max=4157, avg=1969.84, stdev=1450.58 00:31:48.738 clat percentiles (msec): 00:31:48.738 | 1.00th=[ 92], 5.00th=[ 558], 10.00th=[ 584], 20.00th=[ 693], 00:31:48.738 | 30.00th=[ 735], 40.00th=[ 793], 50.00th=[ 1036], 60.00th=[ 3037], 00:31:48.738 | 70.00th=[ 3540], 80.00th=[ 3742], 90.00th=[ 3876], 95.00th=[ 3977], 00:31:48.738 | 99.00th=[ 4077], 99.50th=[ 4077], 99.90th=[ 4111], 99.95th=[ 4111], 00:31:48.738 | 99.99th=[ 4111] 00:31:48.738 bw ( KiB/s): min= 2048, max=186368, per=2.74%, avg=80386.33, stdev=65955.46, samples=12 00:31:48.738 iops : min= 2, max= 182, avg=78.42, stdev=64.45, samples=12 00:31:48.738 lat (msec) : 50=0.16%, 100=1.12%, 250=1.12%, 500=1.60%, 750=31.25% 00:31:48.738 lat (msec) : 1000=14.42%, 2000=9.62%, >=2000=40.71% 00:31:48.738 cpu : usr=0.05%, sys=1.69%, ctx=1172, majf=0, minf=32769 00:31:48.738 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.1%, >=64=89.9% 00:31:48.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.738 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:31:48.738 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.738 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.738 job5: (groupid=0, jobs=1): err= 0: pid=1793215: Wed Jul 24 07:21:01 2024 00:31:48.738 read: IOPS=211, BW=212MiB/s (222MB/s)(2120MiB/10016msec) 00:31:48.738 slat (usec): min=40, max=2065.9k, avg=4712.32, stdev=76826.57 00:31:48.738 clat (msec): min=15, max=4613, avg=454.17, stdev=873.81 00:31:48.738 lat (msec): min=16, max=4614, avg=458.88, stdev=879.59 00:31:48.738 clat percentiles (msec): 00:31:48.738 | 1.00th=[ 40], 5.00th=[ 132], 10.00th=[ 140], 20.00th=[ 140], 00:31:48.738 | 30.00th=[ 142], 40.00th=[ 142], 50.00th=[ 144], 60.00th=[ 279], 00:31:48.738 | 70.00th=[ 279], 80.00th=[ 284], 90.00th=[ 296], 95.00th=[ 2467], 00:31:48.738 | 99.00th=[ 4597], 99.50th=[ 4597], 99.90th=[ 4597], 99.95th=[ 4597], 00:31:48.738 | 99.99th=[ 4597] 00:31:48.738 bw ( KiB/s): min= 6144, max=917716, per=17.38%, avg=509861.38, stdev=294706.42, samples=8 00:31:48.738 iops : min= 6, max= 896, avg=497.88, stdev=287.76, samples=8 00:31:48.738 lat (msec) : 20=0.24%, 50=1.27%, 100=2.12%, 250=49.91%, 500=37.50% 00:31:48.738 lat (msec) : >=2000=8.96% 00:31:48.738 cpu : usr=0.11%, sys=2.87%, ctx=1912, majf=0, minf=32769 00:31:48.738 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:31:48.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.738 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:48.738 issued rwts: total=2120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.738 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.738 job5: (groupid=0, jobs=1): err= 0: pid=1793216: Wed Jul 24 07:21:01 2024 00:31:48.738 read: IOPS=35, BW=35.0MiB/s (36.7MB/s)(381MiB/10882msec) 00:31:48.738 slat (usec): min=61, max=2023.9k, avg=28302.20, stdev=174906.36 00:31:48.738 clat (msec): min=96, max=5722, avg=3111.79, stdev=1687.23 00:31:48.738 lat (msec): min=921, max=5725, avg=3140.10, stdev=1679.77 00:31:48.738 clat percentiles (msec): 00:31:48.738 | 1.00th=[ 919], 5.00th=[ 1083], 10.00th=[ 1200], 20.00th=[ 1301], 
00:31:48.738 | 30.00th=[ 1469], 40.00th=[ 2005], 50.00th=[ 3608], 60.00th=[ 4329], 00:31:48.738 | 70.00th=[ 4396], 80.00th=[ 4866], 90.00th=[ 5269], 95.00th=[ 5604], 00:31:48.738 | 99.00th=[ 5738], 99.50th=[ 5738], 99.90th=[ 5738], 99.95th=[ 5738], 00:31:48.738 | 99.99th=[ 5738] 00:31:48.738 bw ( KiB/s): min=10240, max=143360, per=2.21%, avg=64768.00, stdev=43869.25, samples=8 00:31:48.738 iops : min= 10, max= 140, avg=63.25, stdev=42.84, samples=8 00:31:48.738 lat (msec) : 100=0.26%, 1000=2.89%, 2000=36.48%, >=2000=60.37% 00:31:48.738 cpu : usr=0.01%, sys=1.15%, ctx=958, majf=0, minf=32769 00:31:48.738 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.2%, 32=8.4%, >=64=83.5% 00:31:48.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.738 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:31:48.738 issued rwts: total=381,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.738 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.738 job5: (groupid=0, jobs=1): err= 0: pid=1793217: Wed Jul 24 07:21:01 2024 00:31:48.738 read: IOPS=105, BW=105MiB/s (110MB/s)(1058MiB/10057msec) 00:31:48.738 slat (usec): min=36, max=2002.1k, avg=9461.28, stdev=64028.09 00:31:48.738 clat (msec): min=40, max=3776, avg=895.17, stdev=467.35 00:31:48.738 lat (msec): min=58, max=3778, avg=904.64, stdev=476.70 00:31:48.738 clat percentiles (msec): 00:31:48.738 | 1.00th=[ 99], 5.00th=[ 334], 10.00th=[ 592], 20.00th=[ 659], 00:31:48.738 | 30.00th=[ 684], 40.00th=[ 718], 50.00th=[ 751], 60.00th=[ 827], 00:31:48.738 | 70.00th=[ 936], 80.00th=[ 1200], 90.00th=[ 1418], 95.00th=[ 1703], 00:31:48.738 | 99.00th=[ 3641], 99.50th=[ 3675], 99.90th=[ 3708], 99.95th=[ 3775], 00:31:48.738 | 99.99th=[ 3775] 00:31:48.738 bw ( KiB/s): min=53248, max=194560, per=4.99%, avg=146470.46, stdev=51119.82, samples=13 00:31:48.738 iops : min= 52, max= 190, avg=142.92, stdev=49.98, samples=13 00:31:48.738 lat (msec) : 50=0.09%, 100=1.04%, 250=2.36%, 500=4.63%, 750=41.12% 00:31:48.738 lat (msec) : 1000=22.02%, 2000=27.50%, >=2000=1.23% 00:31:48.738 cpu : usr=0.07%, sys=1.55%, ctx=1169, majf=0, minf=32769 00:31:48.738 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.0% 00:31:48.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.738 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:48.738 issued rwts: total=1058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.738 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.738 00:31:48.738 Run status group 0 (all jobs): 00:31:48.738 READ: bw=2864MiB/s (3003MB/s), 2161KiB/s-295MiB/s (2213kB/s-309MB/s), io=36.2GiB (38.8GB), run=10016-12936msec 00:31:48.738 00:31:48.738 Disk stats (read/write): 00:31:48.738 nvme0n1: ios=36682/0, merge=0/0, ticks=6022965/0, in_queue=6022965, util=98.41% 00:31:48.738 nvme1n1: ios=32343/0, merge=0/0, ticks=6606330/0, in_queue=6606330, util=98.29% 00:31:48.738 nvme2n1: ios=51639/0, merge=0/0, ticks=8246717/0, in_queue=8246717, util=98.91% 00:31:48.738 nvme3n1: ios=28573/0, merge=0/0, ticks=5710382/0, in_queue=5710382, util=98.95% 00:31:48.738 nvme4n1: ios=40018/0, merge=0/0, ticks=5741211/0, in_queue=5741211, util=99.01% 00:31:48.738 nvme5n1: ios=106428/0, merge=0/0, ticks=7994377/0, in_queue=7994377, util=99.11% 00:31:48.739 07:21:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:31:48.739 07:21:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- 
# seq 0 5 00:31:48.739 07:21:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:31:48.739 07:21:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:31:48.739 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:31:48.739 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:31:48.739 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1217 -- # local i=0 00:31:48.739 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:31:48.739 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # grep -q -w SPDK00000000000000 00:31:48.739 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:31:48.739 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1225 -- # grep -q -w SPDK00000000000000 00:31:48.739 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # return 0 00:31:48.739 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:48.739 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.739 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:48.739 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.739 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:31:48.739 07:21:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:49.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:49.666 07:21:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:31:49.666 07:21:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1217 -- # local i=0 00:31:49.666 07:21:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:31:49.666 07:21:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # grep -q -w SPDK00000000000001 00:31:49.666 07:21:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1225 -- # grep -q -w SPDK00000000000001 00:31:49.666 07:21:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:31:49.666 07:21:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # return 0 00:31:49.667 07:21:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:49.667 07:21:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.667 07:21:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:49.667 07:21:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.667 07:21:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:31:49.667 07:21:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:31:50.597 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:31:50.597 07:21:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:31:50.597 07:21:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1217 -- # local i=0 00:31:50.597 07:21:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:31:50.597 07:21:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # grep -q -w SPDK00000000000002 00:31:50.597 07:21:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:31:50.597 07:21:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1225 -- # grep -q -w SPDK00000000000002 00:31:50.597 07:21:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # return 0 00:31:50.597 07:21:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:50.597 07:21:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.597 07:21:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:50.597 07:21:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.597 07:21:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:31:50.597 07:21:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:31:51.527 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:31:51.527 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:31:51.527 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1217 -- # local i=0 00:31:51.527 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:31:51.527 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # grep -q -w SPDK00000000000003 00:31:51.527 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:31:51.527 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1225 -- # grep -q -w SPDK00000000000003 00:31:51.527 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # return 0 00:31:51.527 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:51.527 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.527 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:51.527 07:21:06 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.527 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:31:51.527 07:21:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:31:52.456 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:31:52.456 07:21:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:31:52.456 07:21:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1217 -- # local i=0 00:31:52.456 07:21:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:31:52.456 07:21:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # grep -q -w SPDK00000000000004 00:31:52.456 07:21:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:31:52.456 07:21:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1225 -- # grep -q -w SPDK00000000000004 00:31:52.456 07:21:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # return 0 00:31:52.456 07:21:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:31:52.456 07:21:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.456 07:21:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:52.456 07:21:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.456 07:21:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:31:52.456 07:21:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:31:53.385 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:31:53.385 07:21:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:31:53.385 07:21:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1217 -- # local i=0 00:31:53.385 07:21:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:31:53.385 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # grep -q -w SPDK00000000000005 00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1225 -- # grep -q -w SPDK00000000000005 00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # return 0 00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 
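The xtrace records above repeat the same teardown sequence once per subsystem, cnode0 through cnode5. As a readability aid, that sequence amounts to roughly the shell sketch below; the loop, the nvme/rpc_cmd calls and the serial strings are taken from the trace, while the polling body of waitforserial_disconnect, the serial zero-pad width and the retry cap are assumptions inferred from the lsblk/grep calls it prints.

    # Sketch reconstructed from the xtrace above; not the verbatim srq_overwhelm.sh.
    # rpc_cmd is the SPDK RPC wrapper invoked in the trace (assumed available here).
    waitforserial_disconnect() {
        local serial=$1 tries=0
        # poll until no block device reports this subsystem serial any more
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            ((++tries > 15)) && return 1   # assumed timeout, roughly 15 s
            sleep 1
        done
        return 0
    }

    for i in $(seq 0 5); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"             # drop the host-side controller
        waitforserial_disconnect "$(printf 'SPDK%014d' "$i")"          # serial as printed in the trace (pad width assumed)
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"  # remove the subsystem on the target
    done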
00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # sync 00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@120 -- # set +e 00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:31:53.642 rmmod nvme_rdma 00:31:53.642 rmmod nvme_fabrics 00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set -e 00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # return 0 00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@489 -- # '[' -n 1791542 ']' 00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@490 -- # killprocess 1791542 00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@948 -- # '[' -z 1791542 ']' 00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@952 -- # kill -0 1791542 00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@953 -- # uname 00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1791542 00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1791542' 00:31:53.642 killing process with pid 1791542 00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@967 -- # kill 1791542 00:31:53.642 07:21:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@972 -- # wait 1791542 00:31:56.177 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:56.177 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:31:56.177 00:31:56.177 real 0m39.165s 00:31:56.177 user 2m9.137s 00:31:56.177 sys 0m18.479s 00:31:56.177 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1124 -- # xtrace_disable 
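The tail of the srq_overwhelm run traced above is the usual nvmftestfini path: sync, unload the host-side RDMA/fabrics modules, and kill the nvmf target process (pid 1791542). A minimal sketch of that path, assembled from the xtrace rather than copied from nvmf/common.sh or autotest_common.sh, is shown below; the exact guards and the handling of wait's exit status are assumptions.

    # Readability sketch of the cleanup sequence seen in the trace above.
    nvmfcleanup() {
        sync
        # unload the host-side transport modules (mirrors the rmmod output above)
        modprobe -v -r nvme-rdma || true
        modprobe -v -r nvme-fabrics || true
    }

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                   # already gone, nothing to do
        if [[ $(uname) == Linux ]]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [[ $name == sudo ]] && return 1          # refuse to kill a bare sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                          # reap the target app; exit status ignored (assumption)
    }

    nvmfcleanup
    killprocess 1791542    # nvmf target pid as printed in the trace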
00:31:56.177 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:31:56.177 ************************************ 00:31:56.177 END TEST nvmf_srq_overwhelm 00:31:56.177 ************************************ 00:31:56.177 07:21:10 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:31:56.177 07:21:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:56.177 07:21:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:56.177 07:21:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:31:56.434 ************************************ 00:31:56.434 START TEST nvmf_shutdown 00:31:56.434 ************************************ 00:31:56.434 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:31:56.434 * Looking for test storage... 00:31:56.434 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:31:56.434 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:31:56.434 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:31:56.434 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:56.434 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:56.434 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:56.434 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:56.434 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:56.434 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:56.434 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:56.434 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:56.434 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:56.434 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:56.434 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:31:56.434 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:31:56.434 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:56.434 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:56.434 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:56.434 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:56.434 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:56.434 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:56.434 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:56.434 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:56.435 07:21:10 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:56.435 ************************************ 00:31:56.435 START TEST nvmf_shutdown_tc1 00:31:56.435 ************************************ 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:56.435 07:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:31:56.435 07:21:10 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:04.534 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:04.534 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:32:04.534 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:04.534 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:04.534 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:04.534 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:04.534 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:04.534 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:32:04.534 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:04.534 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:32:04.534 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:04.535 07:21:18 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:32:04.535 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:32:04.535 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:04.535 07:21:18 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:32:04.535 Found net devices under 0000:d9:00.0: mlx_0_0 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:32:04.535 Found net devices under 0000:d9:00.1: mlx_0_1 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # rdma_device_init 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # uname 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@63 -- # modprobe ib_core 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:32:04.535 07:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:32:04.535 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:32:04.535 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:32:04.535 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:32:04.535 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:04.535 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:32:04.535 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:32:04.535 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:04.535 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:32:04.535 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:04.535 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:04.535 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:04.535 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:32:04.535 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:32:04.535 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:04.535 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:04.535 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:04.535 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:04.535 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:04.535 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:32:04.535 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:32:04.535 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 
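The RDMA stack is loaded module by module as traced above (ib_cm, ib_core, ib_umad, ib_uverbs, iw_cm, rdma_cm, rdma_ucm); the per-interface address lookups that follow in the trace then reduce to a single ip/awk/cut pipeline, sketched here:

    # Print the first IPv4 address on an interface, as the trace does for
    # mlx_0_0 (192.168.100.8) and mlx_0_1 (192.168.100.9).
    get_ip_address_sketch() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }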
00:32:04.535 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:32:04.535 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:32:04.535 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:32:04.536 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:04.536 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:32:04.536 altname enp217s0f0np0 00:32:04.536 altname ens818f0np0 00:32:04.536 inet 192.168.100.8/24 scope global mlx_0_0 00:32:04.536 valid_lft forever preferred_lft forever 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:32:04.536 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:04.536 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:32:04.536 altname enp217s0f1np1 00:32:04.536 altname ens818f1np1 00:32:04.536 inet 192.168.100.9/24 scope global mlx_0_1 00:32:04.536 valid_lft forever preferred_lft forever 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:32:04.536 07:21:19 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:32:04.536 07:21:19 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:32:04.536 192.168.100.9' 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:32:04.536 192.168.100.9' 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # head -n 1 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:32:04.536 192.168.100.9' 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # tail -n +2 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # head -n 1 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:32:04.536 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:32:04.794 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:32:04.794 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:04.794 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:04.795 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:04.795 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:32:04.795 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1801289 00:32:04.795 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1801289 00:32:04.795 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1801289 ']' 00:32:04.795 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:32:04.795 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:04.795 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:04.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:04.795 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:04.795 07:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:04.795 [2024-07-24 07:21:19.244530] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:32:04.795 [2024-07-24 07:21:19.244621] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:04.795 EAL: No free 2048 kB hugepages reported on node 1 00:32:04.795 [2024-07-24 07:21:19.390911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:05.053 [2024-07-24 07:21:19.600924] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:05.053 [2024-07-24 07:21:19.600968] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:05.053 [2024-07-24 07:21:19.600982] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:05.053 [2024-07-24 07:21:19.600993] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:05.053 [2024-07-24 07:21:19.601004] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:05.053 [2024-07-24 07:21:19.601156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:05.053 [2024-07-24 07:21:19.601242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:05.053 [2024-07-24 07:21:19.601326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:05.053 [2024-07-24 07:21:19.601342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:32:05.618 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:05.619 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:32:05.619 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:05.619 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:05.619 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:05.619 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:05.619 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:32:05.619 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.619 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:05.619 [2024-07-24 07:21:20.103477] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7f0395fbd940) succeed. 00:32:05.619 [2024-07-24 07:21:20.112907] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7f0395f76940) succeed. 
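Outside the harness, the bring-up traced above amounts to starting the target on cores 1-4 and adding an RDMA transport. A rough stand-alone equivalent (rpc.py is the generic counterpart of the harness's rpc_cmd wrapper; paths shortened):

    # -m 0x1E pins reactors to cores 1-4, -e 0xFFFF enables all tracepoint groups.
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192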
00:32:05.877 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.877 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:32:05.877 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:32:05.877 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:05.877 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:05.877 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:05.877 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:32:05.877 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:32:05.877 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:32:05.877 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:32:05.877 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:32:05.877 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:32:05.877 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:32:05.877 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:32:05.877 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:32:05.877 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:32:05.877 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:32:05.877 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:32:05.877 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:32:05.877 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:32:05.877 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:32:05.877 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:32:05.877 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:32:05.877 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:32:05.877 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:32:05.877 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:32:05.877 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@35 -- # rpc_cmd 00:32:05.877 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.877 07:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:06.135 Malloc1 00:32:06.135 [2024-07-24 07:21:20.586164] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:32:06.135 Malloc2 00:32:06.392 Malloc3 00:32:06.392 Malloc4 00:32:06.392 Malloc5 00:32:06.650 Malloc6 00:32:06.650 Malloc7 00:32:06.907 Malloc8 00:32:06.907 Malloc9 00:32:07.165 Malloc10 00:32:07.165 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.165 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:32:07.165 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:07.165 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:07.165 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1801661 00:32:07.165 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1801661 /var/tmp/bdevperf.sock 00:32:07.165 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1801661 ']' 00:32:07.165 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:07.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
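The Malloc1 through Malloc10 bdevs and the rdma listener reported above come from the rpcs.txt batch assembled in the loop before them. The batch itself is not echoed in the log, but per subsystem it amounts to roughly the following (NQN, serial, and listener values are inferred from elsewhere in the trace; the 64 MiB / 512 B sizes come from MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE):

    # Sketch of one subsystem's worth of RPCs; i=1 shown, the harness loops 1..10.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420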
00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:07.166 { 00:32:07.166 "params": { 00:32:07.166 "name": "Nvme$subsystem", 00:32:07.166 "trtype": "$TEST_TRANSPORT", 00:32:07.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:07.166 "adrfam": "ipv4", 00:32:07.166 "trsvcid": "$NVMF_PORT", 00:32:07.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:07.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:07.166 "hdgst": ${hdgst:-false}, 00:32:07.166 "ddgst": ${ddgst:-false} 00:32:07.166 }, 00:32:07.166 "method": "bdev_nvme_attach_controller" 00:32:07.166 } 00:32:07.166 EOF 00:32:07.166 )") 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:07.166 { 00:32:07.166 "params": { 00:32:07.166 "name": "Nvme$subsystem", 00:32:07.166 "trtype": "$TEST_TRANSPORT", 00:32:07.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:07.166 "adrfam": "ipv4", 00:32:07.166 "trsvcid": "$NVMF_PORT", 00:32:07.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:07.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:07.166 "hdgst": ${hdgst:-false}, 00:32:07.166 "ddgst": ${ddgst:-false} 00:32:07.166 }, 00:32:07.166 "method": "bdev_nvme_attach_controller" 00:32:07.166 } 00:32:07.166 EOF 00:32:07.166 )") 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:07.166 { 00:32:07.166 "params": { 00:32:07.166 "name": "Nvme$subsystem", 00:32:07.166 "trtype": "$TEST_TRANSPORT", 00:32:07.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:07.166 "adrfam": "ipv4", 00:32:07.166 "trsvcid": "$NVMF_PORT", 00:32:07.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:07.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:07.166 "hdgst": ${hdgst:-false}, 00:32:07.166 "ddgst": ${ddgst:-false} 00:32:07.166 }, 00:32:07.166 "method": "bdev_nvme_attach_controller" 00:32:07.166 } 00:32:07.166 EOF 00:32:07.166 )") 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:07.166 { 00:32:07.166 "params": { 00:32:07.166 "name": "Nvme$subsystem", 00:32:07.166 "trtype": "$TEST_TRANSPORT", 00:32:07.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:07.166 "adrfam": "ipv4", 00:32:07.166 "trsvcid": "$NVMF_PORT", 00:32:07.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:07.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:07.166 "hdgst": ${hdgst:-false}, 00:32:07.166 "ddgst": ${ddgst:-false} 00:32:07.166 }, 00:32:07.166 "method": "bdev_nvme_attach_controller" 00:32:07.166 } 00:32:07.166 EOF 00:32:07.166 )") 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:07.166 { 00:32:07.166 "params": { 00:32:07.166 "name": "Nvme$subsystem", 00:32:07.166 "trtype": "$TEST_TRANSPORT", 00:32:07.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:07.166 "adrfam": "ipv4", 00:32:07.166 "trsvcid": "$NVMF_PORT", 00:32:07.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:07.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:07.166 "hdgst": ${hdgst:-false}, 00:32:07.166 "ddgst": ${ddgst:-false} 00:32:07.166 }, 00:32:07.166 "method": "bdev_nvme_attach_controller" 00:32:07.166 } 00:32:07.166 EOF 00:32:07.166 )") 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:07.166 { 00:32:07.166 "params": { 00:32:07.166 "name": "Nvme$subsystem", 00:32:07.166 "trtype": "$TEST_TRANSPORT", 00:32:07.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:07.166 "adrfam": "ipv4", 00:32:07.166 "trsvcid": "$NVMF_PORT", 00:32:07.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:07.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:07.166 "hdgst": ${hdgst:-false}, 00:32:07.166 "ddgst": ${ddgst:-false} 00:32:07.166 }, 00:32:07.166 "method": "bdev_nvme_attach_controller" 00:32:07.166 } 00:32:07.166 EOF 00:32:07.166 )") 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:07.166 { 00:32:07.166 "params": { 00:32:07.166 "name": "Nvme$subsystem", 00:32:07.166 "trtype": "$TEST_TRANSPORT", 00:32:07.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:07.166 "adrfam": "ipv4", 00:32:07.166 "trsvcid": "$NVMF_PORT", 00:32:07.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:07.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:07.166 "hdgst": ${hdgst:-false}, 00:32:07.166 "ddgst": ${ddgst:-false} 00:32:07.166 }, 00:32:07.166 "method": "bdev_nvme_attach_controller" 00:32:07.166 } 00:32:07.166 EOF 00:32:07.166 )") 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:32:07.166 07:21:21 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:07.166 { 00:32:07.166 "params": { 00:32:07.166 "name": "Nvme$subsystem", 00:32:07.166 "trtype": "$TEST_TRANSPORT", 00:32:07.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:07.166 "adrfam": "ipv4", 00:32:07.166 "trsvcid": "$NVMF_PORT", 00:32:07.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:07.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:07.166 "hdgst": ${hdgst:-false}, 00:32:07.166 "ddgst": ${ddgst:-false} 00:32:07.166 }, 00:32:07.166 "method": "bdev_nvme_attach_controller" 00:32:07.166 } 00:32:07.166 EOF 00:32:07.166 )") 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:07.166 { 00:32:07.166 "params": { 00:32:07.166 "name": "Nvme$subsystem", 00:32:07.166 "trtype": "$TEST_TRANSPORT", 00:32:07.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:07.166 "adrfam": "ipv4", 00:32:07.166 "trsvcid": "$NVMF_PORT", 00:32:07.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:07.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:07.166 "hdgst": ${hdgst:-false}, 00:32:07.166 "ddgst": ${ddgst:-false} 00:32:07.166 }, 00:32:07.166 "method": "bdev_nvme_attach_controller" 00:32:07.166 } 00:32:07.166 EOF 00:32:07.166 )") 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:07.166 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:07.166 { 00:32:07.166 "params": { 00:32:07.166 "name": "Nvme$subsystem", 00:32:07.166 "trtype": "$TEST_TRANSPORT", 00:32:07.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:07.166 "adrfam": "ipv4", 00:32:07.166 "trsvcid": "$NVMF_PORT", 00:32:07.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:07.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:07.167 "hdgst": ${hdgst:-false}, 00:32:07.167 "ddgst": ${ddgst:-false} 00:32:07.167 }, 00:32:07.167 "method": "bdev_nvme_attach_controller" 00:32:07.167 } 00:32:07.167 EOF 00:32:07.167 )") 00:32:07.167 [2024-07-24 07:21:21.713957] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:32:07.167 [2024-07-24 07:21:21.714067] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:32:07.167 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:32:07.167 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:32:07.167 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:32:07.167 07:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:07.167 "params": { 00:32:07.167 "name": "Nvme1", 00:32:07.167 "trtype": "rdma", 00:32:07.167 "traddr": "192.168.100.8", 00:32:07.167 "adrfam": "ipv4", 00:32:07.167 "trsvcid": "4420", 00:32:07.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:07.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:07.167 "hdgst": false, 00:32:07.167 "ddgst": false 00:32:07.167 }, 00:32:07.167 "method": "bdev_nvme_attach_controller" 00:32:07.167 },{ 00:32:07.167 "params": { 00:32:07.167 "name": "Nvme2", 00:32:07.167 "trtype": "rdma", 00:32:07.167 "traddr": "192.168.100.8", 00:32:07.167 "adrfam": "ipv4", 00:32:07.167 "trsvcid": "4420", 00:32:07.167 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:07.167 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:07.167 "hdgst": false, 00:32:07.167 "ddgst": false 00:32:07.167 }, 00:32:07.167 "method": "bdev_nvme_attach_controller" 00:32:07.167 },{ 00:32:07.167 "params": { 00:32:07.167 "name": "Nvme3", 00:32:07.167 "trtype": "rdma", 00:32:07.167 "traddr": "192.168.100.8", 00:32:07.167 "adrfam": "ipv4", 00:32:07.167 "trsvcid": "4420", 00:32:07.167 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:32:07.167 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:32:07.167 "hdgst": false, 00:32:07.167 "ddgst": false 00:32:07.167 }, 00:32:07.167 "method": "bdev_nvme_attach_controller" 00:32:07.167 },{ 00:32:07.167 "params": { 00:32:07.167 "name": "Nvme4", 00:32:07.167 "trtype": "rdma", 00:32:07.167 "traddr": "192.168.100.8", 00:32:07.167 "adrfam": "ipv4", 00:32:07.167 "trsvcid": "4420", 00:32:07.167 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:32:07.167 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:32:07.167 "hdgst": false, 00:32:07.167 "ddgst": false 00:32:07.167 }, 00:32:07.167 "method": "bdev_nvme_attach_controller" 00:32:07.167 },{ 00:32:07.167 "params": { 00:32:07.167 "name": "Nvme5", 00:32:07.167 "trtype": "rdma", 00:32:07.167 "traddr": "192.168.100.8", 00:32:07.167 "adrfam": "ipv4", 00:32:07.167 "trsvcid": "4420", 00:32:07.167 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:32:07.167 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:32:07.167 "hdgst": false, 00:32:07.167 "ddgst": false 00:32:07.167 }, 00:32:07.167 "method": "bdev_nvme_attach_controller" 00:32:07.167 },{ 00:32:07.167 "params": { 00:32:07.167 "name": "Nvme6", 00:32:07.167 "trtype": "rdma", 00:32:07.167 "traddr": "192.168.100.8", 00:32:07.167 "adrfam": "ipv4", 00:32:07.167 "trsvcid": "4420", 00:32:07.167 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:32:07.167 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:32:07.167 "hdgst": false, 00:32:07.167 "ddgst": false 00:32:07.167 }, 00:32:07.167 "method": "bdev_nvme_attach_controller" 00:32:07.167 },{ 00:32:07.167 "params": { 00:32:07.167 "name": "Nvme7", 00:32:07.167 "trtype": "rdma", 00:32:07.167 "traddr": "192.168.100.8", 00:32:07.167 "adrfam": "ipv4", 00:32:07.167 "trsvcid": "4420", 00:32:07.167 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:32:07.167 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:32:07.167 "hdgst": false, 00:32:07.167 "ddgst": false 00:32:07.167 }, 00:32:07.167 "method": "bdev_nvme_attach_controller" 00:32:07.167 },{ 00:32:07.167 "params": { 00:32:07.167 "name": "Nvme8", 00:32:07.167 "trtype": "rdma", 00:32:07.167 "traddr": "192.168.100.8", 00:32:07.167 "adrfam": "ipv4", 00:32:07.167 "trsvcid": "4420", 00:32:07.167 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:32:07.167 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:32:07.167 "hdgst": false, 00:32:07.167 "ddgst": false 00:32:07.167 }, 00:32:07.167 "method": "bdev_nvme_attach_controller" 00:32:07.167 },{ 00:32:07.167 "params": { 00:32:07.167 "name": "Nvme9", 00:32:07.167 "trtype": "rdma", 00:32:07.167 "traddr": "192.168.100.8", 00:32:07.167 "adrfam": "ipv4", 00:32:07.167 "trsvcid": "4420", 00:32:07.167 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:32:07.167 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:32:07.167 "hdgst": false, 00:32:07.167 "ddgst": false 00:32:07.167 }, 00:32:07.167 "method": "bdev_nvme_attach_controller" 00:32:07.167 },{ 00:32:07.167 "params": { 00:32:07.167 "name": "Nvme10", 00:32:07.167 "trtype": "rdma", 00:32:07.167 "traddr": "192.168.100.8", 00:32:07.167 "adrfam": "ipv4", 00:32:07.167 "trsvcid": "4420", 00:32:07.167 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:32:07.167 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:32:07.167 "hdgst": false, 00:32:07.167 "ddgst": false 00:32:07.167 }, 00:32:07.167 "method": "bdev_nvme_attach_controller" 00:32:07.167 }' 00:32:07.424 EAL: No free 2048 kB hugepages reported on node 1 00:32:07.424 [2024-07-24 07:21:21.867045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.682 [2024-07-24 07:21:22.094370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:08.614 07:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:08.614 07:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:32:08.614 07:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:32:08.614 07:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.614 07:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:08.614 07:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.614 07:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1801661 00:32:08.614 07:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:32:08.614 07:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:32:09.981 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1801661 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:32:09.981 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1801289 00:32:09.981 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:32:09.981 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:32:09.981 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:32:09.981 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem 
config 00:32:09.981 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:09.981 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:09.981 { 00:32:09.981 "params": { 00:32:09.981 "name": "Nvme$subsystem", 00:32:09.981 "trtype": "$TEST_TRANSPORT", 00:32:09.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:09.981 "adrfam": "ipv4", 00:32:09.981 "trsvcid": "$NVMF_PORT", 00:32:09.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:09.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:09.981 "hdgst": ${hdgst:-false}, 00:32:09.981 "ddgst": ${ddgst:-false} 00:32:09.981 }, 00:32:09.981 "method": "bdev_nvme_attach_controller" 00:32:09.981 } 00:32:09.981 EOF 00:32:09.981 )") 00:32:09.981 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:32:09.981 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:09.981 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:09.981 { 00:32:09.981 "params": { 00:32:09.981 "name": "Nvme$subsystem", 00:32:09.981 "trtype": "$TEST_TRANSPORT", 00:32:09.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:09.981 "adrfam": "ipv4", 00:32:09.981 "trsvcid": "$NVMF_PORT", 00:32:09.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:09.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:09.981 "hdgst": ${hdgst:-false}, 00:32:09.981 "ddgst": ${ddgst:-false} 00:32:09.981 }, 00:32:09.981 "method": "bdev_nvme_attach_controller" 00:32:09.981 } 00:32:09.981 EOF 00:32:09.981 )") 00:32:09.981 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:32:09.981 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:09.981 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:09.981 { 00:32:09.981 "params": { 00:32:09.981 "name": "Nvme$subsystem", 00:32:09.981 "trtype": "$TEST_TRANSPORT", 00:32:09.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:09.981 "adrfam": "ipv4", 00:32:09.981 "trsvcid": "$NVMF_PORT", 00:32:09.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:09.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:09.981 "hdgst": ${hdgst:-false}, 00:32:09.981 "ddgst": ${ddgst:-false} 00:32:09.981 }, 00:32:09.981 "method": "bdev_nvme_attach_controller" 00:32:09.981 } 00:32:09.981 EOF 00:32:09.981 )") 00:32:09.981 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:32:09.981 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:09.981 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:09.981 { 00:32:09.981 "params": { 00:32:09.981 "name": "Nvme$subsystem", 00:32:09.981 "trtype": "$TEST_TRANSPORT", 00:32:09.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:09.981 "adrfam": "ipv4", 00:32:09.981 "trsvcid": "$NVMF_PORT", 00:32:09.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:09.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:09.981 "hdgst": ${hdgst:-false}, 00:32:09.982 "ddgst": ${ddgst:-false} 00:32:09.982 }, 
00:32:09.982 "method": "bdev_nvme_attach_controller" 00:32:09.982 } 00:32:09.982 EOF 00:32:09.982 )") 00:32:09.982 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:32:09.982 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:09.982 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:09.982 { 00:32:09.982 "params": { 00:32:09.982 "name": "Nvme$subsystem", 00:32:09.982 "trtype": "$TEST_TRANSPORT", 00:32:09.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:09.982 "adrfam": "ipv4", 00:32:09.982 "trsvcid": "$NVMF_PORT", 00:32:09.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:09.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:09.982 "hdgst": ${hdgst:-false}, 00:32:09.982 "ddgst": ${ddgst:-false} 00:32:09.982 }, 00:32:09.982 "method": "bdev_nvme_attach_controller" 00:32:09.982 } 00:32:09.982 EOF 00:32:09.982 )") 00:32:09.982 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:32:09.982 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:09.982 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:09.982 { 00:32:09.982 "params": { 00:32:09.982 "name": "Nvme$subsystem", 00:32:09.982 "trtype": "$TEST_TRANSPORT", 00:32:09.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:09.982 "adrfam": "ipv4", 00:32:09.982 "trsvcid": "$NVMF_PORT", 00:32:09.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:09.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:09.982 "hdgst": ${hdgst:-false}, 00:32:09.982 "ddgst": ${ddgst:-false} 00:32:09.982 }, 00:32:09.982 "method": "bdev_nvme_attach_controller" 00:32:09.982 } 00:32:09.982 EOF 00:32:09.982 )") 00:32:09.982 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:32:09.982 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:09.982 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:09.982 { 00:32:09.982 "params": { 00:32:09.982 "name": "Nvme$subsystem", 00:32:09.982 "trtype": "$TEST_TRANSPORT", 00:32:09.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:09.982 "adrfam": "ipv4", 00:32:09.982 "trsvcid": "$NVMF_PORT", 00:32:09.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:09.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:09.982 "hdgst": ${hdgst:-false}, 00:32:09.982 "ddgst": ${ddgst:-false} 00:32:09.982 }, 00:32:09.982 "method": "bdev_nvme_attach_controller" 00:32:09.982 } 00:32:09.982 EOF 00:32:09.982 )") 00:32:09.982 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:32:09.982 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:09.982 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:09.982 { 00:32:09.982 "params": { 00:32:09.982 "name": "Nvme$subsystem", 00:32:09.982 "trtype": "$TEST_TRANSPORT", 00:32:09.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:09.982 "adrfam": "ipv4", 00:32:09.982 "trsvcid": "$NVMF_PORT", 00:32:09.982 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:09.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:09.982 "hdgst": ${hdgst:-false}, 00:32:09.982 "ddgst": ${ddgst:-false} 00:32:09.982 }, 00:32:09.982 "method": "bdev_nvme_attach_controller" 00:32:09.982 } 00:32:09.982 EOF 00:32:09.982 )") 00:32:09.982 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:32:09.982 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:09.982 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:09.982 { 00:32:09.982 "params": { 00:32:09.982 "name": "Nvme$subsystem", 00:32:09.982 "trtype": "$TEST_TRANSPORT", 00:32:09.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:09.982 "adrfam": "ipv4", 00:32:09.982 "trsvcid": "$NVMF_PORT", 00:32:09.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:09.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:09.982 "hdgst": ${hdgst:-false}, 00:32:09.982 "ddgst": ${ddgst:-false} 00:32:09.982 }, 00:32:09.982 "method": "bdev_nvme_attach_controller" 00:32:09.982 } 00:32:09.982 EOF 00:32:09.982 )") 00:32:09.982 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:32:09.982 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:09.982 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:09.982 { 00:32:09.982 "params": { 00:32:09.982 "name": "Nvme$subsystem", 00:32:09.982 "trtype": "$TEST_TRANSPORT", 00:32:09.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:09.982 "adrfam": "ipv4", 00:32:09.982 "trsvcid": "$NVMF_PORT", 00:32:09.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:09.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:09.982 "hdgst": ${hdgst:-false}, 00:32:09.982 "ddgst": ${ddgst:-false} 00:32:09.982 }, 00:32:09.982 "method": "bdev_nvme_attach_controller" 00:32:09.982 } 00:32:09.982 EOF 00:32:09.982 )") 00:32:09.982 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:32:09.982 [2024-07-24 07:21:24.327076] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:32:09.982 [2024-07-24 07:21:24.327200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1802214 ] 00:32:09.982 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:32:09.982 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:32:09.982 07:21:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:09.982 "params": { 00:32:09.982 "name": "Nvme1", 00:32:09.982 "trtype": "rdma", 00:32:09.982 "traddr": "192.168.100.8", 00:32:09.982 "adrfam": "ipv4", 00:32:09.982 "trsvcid": "4420", 00:32:09.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:09.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:09.982 "hdgst": false, 00:32:09.982 "ddgst": false 00:32:09.982 }, 00:32:09.982 "method": "bdev_nvme_attach_controller" 00:32:09.982 },{ 00:32:09.982 "params": { 00:32:09.982 "name": "Nvme2", 00:32:09.982 "trtype": "rdma", 00:32:09.982 "traddr": "192.168.100.8", 00:32:09.982 "adrfam": "ipv4", 00:32:09.982 "trsvcid": "4420", 00:32:09.982 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:09.982 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:09.982 "hdgst": false, 00:32:09.982 "ddgst": false 00:32:09.982 }, 00:32:09.982 "method": "bdev_nvme_attach_controller" 00:32:09.982 },{ 00:32:09.982 "params": { 00:32:09.982 "name": "Nvme3", 00:32:09.982 "trtype": "rdma", 00:32:09.982 "traddr": "192.168.100.8", 00:32:09.982 "adrfam": "ipv4", 00:32:09.982 "trsvcid": "4420", 00:32:09.982 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:32:09.982 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:32:09.982 "hdgst": false, 00:32:09.982 "ddgst": false 00:32:09.982 }, 00:32:09.982 "method": "bdev_nvme_attach_controller" 00:32:09.982 },{ 00:32:09.982 "params": { 00:32:09.982 "name": "Nvme4", 00:32:09.982 "trtype": "rdma", 00:32:09.982 "traddr": "192.168.100.8", 00:32:09.982 "adrfam": "ipv4", 00:32:09.982 "trsvcid": "4420", 00:32:09.982 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:32:09.982 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:32:09.982 "hdgst": false, 00:32:09.982 "ddgst": false 00:32:09.982 }, 00:32:09.982 "method": "bdev_nvme_attach_controller" 00:32:09.982 },{ 00:32:09.982 "params": { 00:32:09.982 "name": "Nvme5", 00:32:09.982 "trtype": "rdma", 00:32:09.982 "traddr": "192.168.100.8", 00:32:09.982 "adrfam": "ipv4", 00:32:09.982 "trsvcid": "4420", 00:32:09.982 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:32:09.982 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:32:09.982 "hdgst": false, 00:32:09.982 "ddgst": false 00:32:09.982 }, 00:32:09.982 "method": "bdev_nvme_attach_controller" 00:32:09.982 },{ 00:32:09.982 "params": { 00:32:09.982 "name": "Nvme6", 00:32:09.982 "trtype": "rdma", 00:32:09.982 "traddr": "192.168.100.8", 00:32:09.982 "adrfam": "ipv4", 00:32:09.982 "trsvcid": "4420", 00:32:09.982 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:32:09.982 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:32:09.982 "hdgst": false, 00:32:09.982 "ddgst": false 00:32:09.982 }, 00:32:09.982 "method": "bdev_nvme_attach_controller" 00:32:09.982 },{ 00:32:09.982 "params": { 00:32:09.982 "name": "Nvme7", 00:32:09.982 "trtype": "rdma", 00:32:09.982 "traddr": "192.168.100.8", 00:32:09.982 "adrfam": "ipv4", 00:32:09.982 "trsvcid": "4420", 00:32:09.982 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:32:09.982 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:32:09.982 "hdgst": false, 00:32:09.982 "ddgst": false 00:32:09.982 }, 00:32:09.982 "method": "bdev_nvme_attach_controller" 00:32:09.983 },{ 00:32:09.983 "params": { 00:32:09.983 "name": "Nvme8", 00:32:09.983 "trtype": "rdma", 00:32:09.983 "traddr": "192.168.100.8", 00:32:09.983 "adrfam": "ipv4", 00:32:09.983 "trsvcid": "4420", 00:32:09.983 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:32:09.983 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:32:09.983 "hdgst": false, 00:32:09.983 "ddgst": false 00:32:09.983 }, 00:32:09.983 "method": "bdev_nvme_attach_controller" 00:32:09.983 },{ 00:32:09.983 "params": { 00:32:09.983 "name": "Nvme9", 00:32:09.983 "trtype": "rdma", 00:32:09.983 "traddr": "192.168.100.8", 00:32:09.983 "adrfam": "ipv4", 00:32:09.983 "trsvcid": "4420", 00:32:09.983 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:32:09.983 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:32:09.983 "hdgst": false, 00:32:09.983 "ddgst": false 00:32:09.983 }, 00:32:09.983 "method": "bdev_nvme_attach_controller" 00:32:09.983 },{ 00:32:09.983 "params": { 00:32:09.983 "name": "Nvme10", 00:32:09.983 "trtype": "rdma", 00:32:09.983 "traddr": "192.168.100.8", 00:32:09.983 "adrfam": "ipv4", 00:32:09.983 "trsvcid": "4420", 00:32:09.983 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:32:09.983 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:32:09.983 "hdgst": false, 00:32:09.983 "ddgst": false 00:32:09.983 }, 00:32:09.983 "method": "bdev_nvme_attach_controller" 00:32:09.983 }' 00:32:09.983 EAL: No free 2048 kB hugepages reported on node 1 00:32:09.983 [2024-07-24 07:21:24.480149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:10.239 [2024-07-24 07:21:24.709064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:11.605 Running I/O for 1 seconds... 00:32:12.534 00:32:12.534 Latency(us) 00:32:12.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:12.534 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:12.534 Verification LBA range: start 0x0 length 0x400 00:32:12.534 Nvme1n1 : 1.18 325.38 20.34 0.00 0.00 190364.06 46976.20 231525.58 00:32:12.534 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:12.534 Verification LBA range: start 0x0 length 0x400 00:32:12.534 Nvme2n1 : 1.18 326.68 20.42 0.00 0.00 186874.17 5347.74 219781.53 00:32:12.534 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:12.534 Verification LBA range: start 0x0 length 0x400 00:32:12.534 Nvme3n1 : 1.18 346.64 21.66 0.00 0.00 172514.06 4639.95 152672.67 00:32:12.534 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:12.534 Verification LBA range: start 0x0 length 0x400 00:32:12.534 Nvme4n1 : 1.18 347.11 21.69 0.00 0.00 170157.61 13107.20 145122.92 00:32:12.534 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:12.534 Verification LBA range: start 0x0 length 0x400 00:32:12.534 Nvme5n1 : 1.19 337.52 21.10 0.00 0.00 171077.01 17930.65 133378.87 00:32:12.534 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:12.534 Verification LBA range: start 0x0 length 0x400 00:32:12.534 Nvme6n1 : 1.19 346.43 21.65 0.00 0.00 165498.08 20552.09 124990.26 00:32:12.534 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:12.534 Verification LBA range: start 0x0 length 0x400 00:32:12.534 Nvme7n1 : 1.19 375.45 23.47 0.00 0.00 154336.84 4744.81 114923.93 00:32:12.534 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:12.534 Verification LBA range: start 0x0 length 0x400 00:32:12.534 Nvme8n1 : 1.19 374.92 23.43 0.00 0.00 152266.05 5400.17 110729.63 00:32:12.534 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:12.534 Verification LBA range: start 0x0 length 0x400 00:32:12.534 Nvme9n1 : 1.20 374.29 23.39 0.00 0.00 150472.47 6474.96 122473.68 
00:32:12.534 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:12.534 Verification LBA range: start 0x0 length 0x400 00:32:12.534 Nvme10n1 : 1.19 322.32 20.14 0.00 0.00 171745.96 11429.48 176999.63 00:32:12.535 =================================================================================================================== 00:32:12.535 Total : 3476.75 217.30 0.00 0.00 167813.44 4639.95 231525.58 00:32:13.903 07:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:32:13.904 07:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:32:13.904 07:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:32:13.904 07:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:13.904 07:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:32:13.904 07:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:13.904 07:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:32:13.904 07:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:32:13.904 07:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:32:13.904 07:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:32:13.904 07:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:13.904 07:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:32:13.904 rmmod nvme_rdma 00:32:13.904 rmmod nvme_fabrics 00:32:13.904 07:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:13.904 07:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:32:13.904 07:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:32:13.904 07:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1801289 ']' 00:32:13.904 07:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1801289 00:32:13.904 07:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1801289 ']' 00:32:13.904 07:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1801289 00:32:13.904 07:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:32:13.904 07:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:13.904 07:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1801289 00:32:14.161 07:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:14.161 07:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:14.161 07:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1801289' 00:32:14.161 killing process with pid 1801289 00:32:14.161 07:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1801289 00:32:14.161 07:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1801289 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:32:18.380 00:32:18.380 real 0m21.294s 00:32:18.380 user 0m54.780s 00:32:18.380 sys 0m7.898s 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:18.380 ************************************ 00:32:18.380 END TEST nvmf_shutdown_tc1 00:32:18.380 ************************************ 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:18.380 ************************************ 00:32:18.380 START TEST nvmf_shutdown_tc2 00:32:18.380 ************************************ 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:18.380 07:21:32 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:32:18.380 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:32:18.380 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:32:18.381 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:32:18.381 Found net devices under 0000:d9:00.0: mlx_0_0 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:32:18.381 Found net devices under 0000:d9:00.1: mlx_0_1 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # rdma_device_init 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # uname 00:32:18.381 
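The NIC discovery traced above (gather_supported_nvmf_pci_devs) reduces to a sysfs walk: collect the ConnectX PCI functions by vendor/device ID and read the netdev bound under each one. A stand-alone approximation of that lookup follows; the vendor ID 0x15b3 and the 0000:d9:00.x addresses are taken from this host, other hosts will differ.

for pci in /sys/bus/pci/devices/0000:d9:00.*; do
    vendor=$(cat "$pci/vendor") device=$(cat "$pci/device")
    [[ $vendor == 0x15b3 ]] || continue              # Mellanox/NVIDIA only
    net=$(ls "$pci/net" 2>/dev/null)                 # e.g. mlx_0_0 / mlx_0_1
    echo "Found ${pci##*/} ($vendor - $device): ${net:-no netdev bound}"
done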
07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@63 -- # modprobe ib_core 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:18.381 07:21:32 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:32:18.381 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:18.381 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:32:18.381 altname enp217s0f0np0 00:32:18.381 altname ens818f0np0 00:32:18.381 inet 192.168.100.8/24 scope global mlx_0_0 00:32:18.381 valid_lft forever preferred_lft forever 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:32:18.381 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:18.381 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:32:18.381 altname enp217s0f1np1 00:32:18.381 altname ens818f1np1 00:32:18.381 inet 192.168.100.9/24 scope global mlx_0_1 00:32:18.381 valid_lft forever preferred_lft forever 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # 
NVMF_TRANSPORT_OPTS='-t rdma' 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:32:18.381 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 
00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:32:18.382 192.168.100.9' 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:32:18.382 192.168.100.9' 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # head -n 1 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:32:18.382 192.168.100.9' 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # tail -n +2 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # head -n 1 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1803656 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1803656 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1803656 ']' 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:18.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:18.382 07:21:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:18.382 [2024-07-24 07:21:32.701463] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:32:18.382 [2024-07-24 07:21:32.701556] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:18.382 EAL: No free 2048 kB hugepages reported on node 1 00:32:18.382 [2024-07-24 07:21:32.849381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:18.640 [2024-07-24 07:21:33.054413] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:18.640 [2024-07-24 07:21:33.054454] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:18.640 [2024-07-24 07:21:33.054468] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:18.640 [2024-07-24 07:21:33.054478] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:18.640 [2024-07-24 07:21:33.054490] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
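The nvmfappstart step above boils down to: start nvmf_tgt with the requested core mask, record nvmfpid, and block in waitforlisten until the RPC socket answers. A reduced sketch of that flow; the polling loop and the rpc_get_methods probe are assumptions, as the real waitforlisten in autotest_common.sh does more bookkeeping.

rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$rootdir"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# Poll the UNIX-domain RPC socket until the target responds (or give up).
for _ in $(seq 1 100); do
    "$rootdir"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
done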
00:32:18.640 [2024-07-24 07:21:33.054622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:18.640 [2024-07-24 07:21:33.054689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:18.640 [2024-07-24 07:21:33.054780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:18.640 [2024-07-24 07:21:33.054808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:32:18.896 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:18.896 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:32:18.896 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:18.896 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:18.896 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:18.896 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:18.896 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:32:18.896 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.896 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:19.151 [2024-07-24 07:21:33.551919] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7ff937d32940) succeed. 00:32:19.151 [2024-07-24 07:21:33.561444] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7ff937cee940) succeed. 
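The nvmf_create_transport call above is what registers the RDMA transport against both ConnectX ports, which is why the two create_ib_device notices for mlx5_0 and mlx5_1 follow it. Issued directly against the target's RPC socket, the same call would look roughly like this (flags copied from the trace):

/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192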
00:32:19.408 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.408 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:32:19.408 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:32:19.408 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:19.408 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:19.408 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:19.408 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:32:19.408 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:32:19.408 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:32:19.408 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:32:19.408 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:32:19.408 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:32:19.408 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:32:19.408 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:32:19.408 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:32:19.408 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:32:19.408 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:32:19.408 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:32:19.408 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:32:19.408 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:32:19.408 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:32:19.408 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:32:19.408 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:32:19.408 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:32:19.408 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:32:19.408 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:32:19.408 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@35 -- # rpc_cmd 00:32:19.408 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.408 07:21:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:19.408 Malloc1 00:32:19.666 [2024-07-24 07:21:34.051319] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:32:19.666 Malloc2 00:32:19.666 Malloc3 00:32:19.923 Malloc4 00:32:19.923 Malloc5 00:32:20.180 Malloc6 00:32:20.180 Malloc7 00:32:20.180 Malloc8 00:32:20.438 Malloc9 00:32:20.438 Malloc10 00:32:20.438 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.438 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:32:20.438 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:20.438 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:20.695 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1803996 00:32:20.695 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1803996 /var/tmp/bdevperf.sock 00:32:20.695 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1803996 ']' 00:32:20.695 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:20.695 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:20.695 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:32:20.695 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:20.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
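[editor's note] The cat loop above appends one block of RPC commands per subsystem to rpcs.txt, and the single rpc_cmd at target/shutdown.sh@35 replays the whole file in one batch, which is what produces Malloc1 through Malloc10 and the RDMA listener notice. A hedged sketch of what one loop iteration contributes follows; the bdev size and subsystem flags are illustrative assumptions, the exact values live in shutdown.sh:

  # per-subsystem RPCs written to rpcs.txt (illustrative values only)
  bdev_malloc_create -b Malloc$i 128 512
  nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
  nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420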
00:32:20.695 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:32:20.695 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:20.695 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:32:20.695 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:20.695 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:32:20.695 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:20.695 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:20.695 { 00:32:20.695 "params": { 00:32:20.695 "name": "Nvme$subsystem", 00:32:20.695 "trtype": "$TEST_TRANSPORT", 00:32:20.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:20.695 "adrfam": "ipv4", 00:32:20.695 "trsvcid": "$NVMF_PORT", 00:32:20.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:20.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:20.695 "hdgst": ${hdgst:-false}, 00:32:20.695 "ddgst": ${ddgst:-false} 00:32:20.695 }, 00:32:20.695 "method": "bdev_nvme_attach_controller" 00:32:20.695 } 00:32:20.695 EOF 00:32:20.695 )") 00:32:20.695 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:32:20.695 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:20.695 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:20.695 { 00:32:20.696 "params": { 00:32:20.696 "name": "Nvme$subsystem", 00:32:20.696 "trtype": "$TEST_TRANSPORT", 00:32:20.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:20.696 "adrfam": "ipv4", 00:32:20.696 "trsvcid": "$NVMF_PORT", 00:32:20.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:20.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:20.696 "hdgst": ${hdgst:-false}, 00:32:20.696 "ddgst": ${ddgst:-false} 00:32:20.696 }, 00:32:20.696 "method": "bdev_nvme_attach_controller" 00:32:20.696 } 00:32:20.696 EOF 00:32:20.696 )") 00:32:20.696 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:32:20.696 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:20.696 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:20.696 { 00:32:20.696 "params": { 00:32:20.696 "name": "Nvme$subsystem", 00:32:20.696 "trtype": "$TEST_TRANSPORT", 00:32:20.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:20.696 "adrfam": "ipv4", 00:32:20.696 "trsvcid": "$NVMF_PORT", 00:32:20.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:20.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:20.696 "hdgst": ${hdgst:-false}, 00:32:20.696 "ddgst": ${ddgst:-false} 00:32:20.696 }, 00:32:20.696 "method": "bdev_nvme_attach_controller" 00:32:20.696 } 00:32:20.696 EOF 00:32:20.696 )") 00:32:20.696 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:32:20.696 07:21:35 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:20.696 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:20.696 { 00:32:20.696 "params": { 00:32:20.696 "name": "Nvme$subsystem", 00:32:20.696 "trtype": "$TEST_TRANSPORT", 00:32:20.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:20.696 "adrfam": "ipv4", 00:32:20.696 "trsvcid": "$NVMF_PORT", 00:32:20.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:20.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:20.696 "hdgst": ${hdgst:-false}, 00:32:20.696 "ddgst": ${ddgst:-false} 00:32:20.696 }, 00:32:20.696 "method": "bdev_nvme_attach_controller" 00:32:20.696 } 00:32:20.696 EOF 00:32:20.696 )") 00:32:20.696 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:32:20.696 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:20.696 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:20.696 { 00:32:20.696 "params": { 00:32:20.696 "name": "Nvme$subsystem", 00:32:20.696 "trtype": "$TEST_TRANSPORT", 00:32:20.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:20.696 "adrfam": "ipv4", 00:32:20.696 "trsvcid": "$NVMF_PORT", 00:32:20.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:20.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:20.696 "hdgst": ${hdgst:-false}, 00:32:20.696 "ddgst": ${ddgst:-false} 00:32:20.696 }, 00:32:20.696 "method": "bdev_nvme_attach_controller" 00:32:20.696 } 00:32:20.696 EOF 00:32:20.696 )") 00:32:20.696 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:32:20.696 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:20.696 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:20.696 { 00:32:20.696 "params": { 00:32:20.696 "name": "Nvme$subsystem", 00:32:20.696 "trtype": "$TEST_TRANSPORT", 00:32:20.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:20.696 "adrfam": "ipv4", 00:32:20.696 "trsvcid": "$NVMF_PORT", 00:32:20.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:20.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:20.696 "hdgst": ${hdgst:-false}, 00:32:20.696 "ddgst": ${ddgst:-false} 00:32:20.696 }, 00:32:20.696 "method": "bdev_nvme_attach_controller" 00:32:20.696 } 00:32:20.696 EOF 00:32:20.696 )") 00:32:20.696 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:32:20.696 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:20.696 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:20.696 { 00:32:20.696 "params": { 00:32:20.696 "name": "Nvme$subsystem", 00:32:20.696 "trtype": "$TEST_TRANSPORT", 00:32:20.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:20.696 "adrfam": "ipv4", 00:32:20.696 "trsvcid": "$NVMF_PORT", 00:32:20.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:20.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:20.696 "hdgst": ${hdgst:-false}, 00:32:20.696 "ddgst": ${ddgst:-false} 00:32:20.696 }, 00:32:20.696 "method": 
"bdev_nvme_attach_controller" 00:32:20.696 } 00:32:20.696 EOF 00:32:20.696 )") 00:32:20.696 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:32:20.696 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:20.696 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:20.696 { 00:32:20.696 "params": { 00:32:20.696 "name": "Nvme$subsystem", 00:32:20.696 "trtype": "$TEST_TRANSPORT", 00:32:20.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:20.696 "adrfam": "ipv4", 00:32:20.696 "trsvcid": "$NVMF_PORT", 00:32:20.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:20.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:20.696 "hdgst": ${hdgst:-false}, 00:32:20.696 "ddgst": ${ddgst:-false} 00:32:20.696 }, 00:32:20.696 "method": "bdev_nvme_attach_controller" 00:32:20.696 } 00:32:20.696 EOF 00:32:20.696 )") 00:32:20.696 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:32:20.696 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:20.696 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:20.696 { 00:32:20.696 "params": { 00:32:20.696 "name": "Nvme$subsystem", 00:32:20.696 "trtype": "$TEST_TRANSPORT", 00:32:20.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:20.696 "adrfam": "ipv4", 00:32:20.696 "trsvcid": "$NVMF_PORT", 00:32:20.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:20.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:20.696 "hdgst": ${hdgst:-false}, 00:32:20.696 "ddgst": ${ddgst:-false} 00:32:20.696 }, 00:32:20.696 "method": "bdev_nvme_attach_controller" 00:32:20.696 } 00:32:20.696 EOF 00:32:20.696 )") 00:32:20.696 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:32:20.696 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:20.696 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:20.696 { 00:32:20.696 "params": { 00:32:20.696 "name": "Nvme$subsystem", 00:32:20.696 "trtype": "$TEST_TRANSPORT", 00:32:20.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:20.696 "adrfam": "ipv4", 00:32:20.696 "trsvcid": "$NVMF_PORT", 00:32:20.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:20.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:20.696 "hdgst": ${hdgst:-false}, 00:32:20.696 "ddgst": ${ddgst:-false} 00:32:20.696 }, 00:32:20.696 "method": "bdev_nvme_attach_controller" 00:32:20.696 } 00:32:20.696 EOF 00:32:20.696 )") 00:32:20.696 [2024-07-24 07:21:35.185685] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:32:20.696 [2024-07-24 07:21:35.185778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1803996 ] 00:32:20.696 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:32:20.696 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:32:20.696 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:32:20.696 07:21:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:20.696 "params": { 00:32:20.696 "name": "Nvme1", 00:32:20.696 "trtype": "rdma", 00:32:20.696 "traddr": "192.168.100.8", 00:32:20.696 "adrfam": "ipv4", 00:32:20.696 "trsvcid": "4420", 00:32:20.696 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:20.696 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:20.696 "hdgst": false, 00:32:20.696 "ddgst": false 00:32:20.696 }, 00:32:20.697 "method": "bdev_nvme_attach_controller" 00:32:20.697 },{ 00:32:20.697 "params": { 00:32:20.697 "name": "Nvme2", 00:32:20.697 "trtype": "rdma", 00:32:20.697 "traddr": "192.168.100.8", 00:32:20.697 "adrfam": "ipv4", 00:32:20.697 "trsvcid": "4420", 00:32:20.697 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:20.697 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:20.697 "hdgst": false, 00:32:20.697 "ddgst": false 00:32:20.697 }, 00:32:20.697 "method": "bdev_nvme_attach_controller" 00:32:20.697 },{ 00:32:20.697 "params": { 00:32:20.697 "name": "Nvme3", 00:32:20.697 "trtype": "rdma", 00:32:20.697 "traddr": "192.168.100.8", 00:32:20.697 "adrfam": "ipv4", 00:32:20.697 "trsvcid": "4420", 00:32:20.697 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:32:20.697 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:32:20.697 "hdgst": false, 00:32:20.697 "ddgst": false 00:32:20.697 }, 00:32:20.697 "method": "bdev_nvme_attach_controller" 00:32:20.697 },{ 00:32:20.697 "params": { 00:32:20.697 "name": "Nvme4", 00:32:20.697 "trtype": "rdma", 00:32:20.697 "traddr": "192.168.100.8", 00:32:20.697 "adrfam": "ipv4", 00:32:20.697 "trsvcid": "4420", 00:32:20.697 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:32:20.697 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:32:20.697 "hdgst": false, 00:32:20.697 "ddgst": false 00:32:20.697 }, 00:32:20.697 "method": "bdev_nvme_attach_controller" 00:32:20.697 },{ 00:32:20.697 "params": { 00:32:20.697 "name": "Nvme5", 00:32:20.697 "trtype": "rdma", 00:32:20.697 "traddr": "192.168.100.8", 00:32:20.697 "adrfam": "ipv4", 00:32:20.697 "trsvcid": "4420", 00:32:20.697 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:32:20.697 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:32:20.697 "hdgst": false, 00:32:20.697 "ddgst": false 00:32:20.697 }, 00:32:20.697 "method": "bdev_nvme_attach_controller" 00:32:20.697 },{ 00:32:20.697 "params": { 00:32:20.697 "name": "Nvme6", 00:32:20.697 "trtype": "rdma", 00:32:20.697 "traddr": "192.168.100.8", 00:32:20.697 "adrfam": "ipv4", 00:32:20.697 "trsvcid": "4420", 00:32:20.697 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:32:20.697 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:32:20.697 "hdgst": false, 00:32:20.697 "ddgst": false 00:32:20.697 }, 00:32:20.697 "method": "bdev_nvme_attach_controller" 00:32:20.697 },{ 00:32:20.697 "params": { 00:32:20.697 "name": "Nvme7", 00:32:20.697 "trtype": "rdma", 00:32:20.697 "traddr": "192.168.100.8", 00:32:20.697 "adrfam": "ipv4", 00:32:20.697 "trsvcid": "4420", 00:32:20.697 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:32:20.697 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:32:20.697 "hdgst": false, 00:32:20.697 "ddgst": false 00:32:20.697 }, 00:32:20.697 "method": "bdev_nvme_attach_controller" 00:32:20.697 },{ 00:32:20.697 "params": { 00:32:20.697 "name": "Nvme8", 00:32:20.697 "trtype": "rdma", 00:32:20.697 "traddr": "192.168.100.8", 00:32:20.697 "adrfam": "ipv4", 00:32:20.697 "trsvcid": "4420", 00:32:20.697 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:32:20.697 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:32:20.697 "hdgst": false, 00:32:20.697 "ddgst": false 00:32:20.697 }, 00:32:20.697 "method": "bdev_nvme_attach_controller" 00:32:20.697 },{ 00:32:20.697 "params": { 00:32:20.697 "name": "Nvme9", 00:32:20.697 "trtype": "rdma", 00:32:20.697 "traddr": "192.168.100.8", 00:32:20.697 "adrfam": "ipv4", 00:32:20.697 "trsvcid": "4420", 00:32:20.697 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:32:20.697 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:32:20.697 "hdgst": false, 00:32:20.697 "ddgst": false 00:32:20.697 }, 00:32:20.697 "method": "bdev_nvme_attach_controller" 00:32:20.697 },{ 00:32:20.697 "params": { 00:32:20.697 "name": "Nvme10", 00:32:20.697 "trtype": "rdma", 00:32:20.697 "traddr": "192.168.100.8", 00:32:20.697 "adrfam": "ipv4", 00:32:20.697 "trsvcid": "4420", 00:32:20.697 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:32:20.697 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:32:20.697 "hdgst": false, 00:32:20.697 "ddgst": false 00:32:20.697 }, 00:32:20.697 "method": "bdev_nvme_attach_controller" 00:32:20.697 }' 00:32:20.697 EAL: No free 2048 kB hugepages reported on node 1 00:32:20.954 [2024-07-24 07:21:35.337513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.954 [2024-07-24 07:21:35.563197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:22.325 Running I/O for 10 seconds... 00:32:22.325 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:22.325 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:32:22.325 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:32:22.325 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.325 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:22.582 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.582 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:32:22.582 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:32:22.582 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:32:22.582 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:32:22.582 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:32:22.582 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:32:22.582 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:32:22.582 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:32:22.582 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:32:22.582 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.582 07:21:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:22.582 07:21:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.582 07:21:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=19 00:32:22.582 07:21:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 19 -ge 100 ']' 00:32:22.582 07:21:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:32:22.840 07:21:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:32:22.840 07:21:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:32:22.840 07:21:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:32:22.840 07:21:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:32:22.840 07:21:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.840 07:21:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:23.097 07:21:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.097 07:21:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=168 00:32:23.097 07:21:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 168 -ge 100 ']' 00:32:23.097 07:21:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:32:23.097 07:21:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:32:23.097 07:21:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:32:23.097 07:21:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1803996 00:32:23.097 07:21:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1803996 ']' 00:32:23.097 07:21:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1803996 00:32:23.097 07:21:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:32:23.097 07:21:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:23.097 07:21:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1803996 00:32:23.097 07:21:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:23.097 07:21:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:23.097 07:21:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1803996' 
00:32:23.097 killing process with pid 1803996 00:32:23.097 07:21:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1803996 00:32:23.097 07:21:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1803996 00:32:23.097 Received shutdown signal, test time was about 0.910238 seconds 00:32:23.097 00:32:23.097 Latency(us) 00:32:23.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:23.097 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:23.097 Verification LBA range: start 0x0 length 0x400 00:32:23.097 Nvme1n1 : 0.89 327.44 20.46 0.00 0.00 190998.92 6868.17 248302.80 00:32:23.097 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:23.097 Verification LBA range: start 0x0 length 0x400 00:32:23.097 Nvme2n1 : 0.90 357.04 22.31 0.00 0.00 172087.13 6239.03 175321.91 00:32:23.097 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:23.097 Verification LBA range: start 0x0 length 0x400 00:32:23.097 Nvme3n1 : 0.90 356.48 22.28 0.00 0.00 169147.92 10380.90 167772.16 00:32:23.097 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:23.097 Verification LBA range: start 0x0 length 0x400 00:32:23.097 Nvme4n1 : 0.90 355.93 22.25 0.00 0.00 166196.02 10643.05 161061.27 00:32:23.097 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:23.097 Verification LBA range: start 0x0 length 0x400 00:32:23.097 Nvme5n1 : 0.90 355.20 22.20 0.00 0.00 164051.76 11377.05 148478.36 00:32:23.097 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:23.097 Verification LBA range: start 0x0 length 0x400 00:32:23.097 Nvme6n1 : 0.90 354.65 22.17 0.00 0.00 160375.52 11744.05 141767.48 00:32:23.097 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:23.097 Verification LBA range: start 0x0 length 0x400 00:32:23.097 Nvme7n1 : 0.90 354.11 22.13 0.00 0.00 157345.22 11953.77 134217.73 00:32:23.097 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:23.097 Verification LBA range: start 0x0 length 0x400 00:32:23.097 Nvme8n1 : 0.91 353.46 22.09 0.00 0.00 154914.32 12478.05 123312.54 00:32:23.097 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:23.097 Verification LBA range: start 0x0 length 0x400 00:32:23.098 Nvme9n1 : 0.91 352.71 22.04 0.00 0.00 152447.63 13369.34 109890.76 00:32:23.098 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:23.098 Verification LBA range: start 0x0 length 0x400 00:32:23.098 Nvme10n1 : 0.91 281.53 17.60 0.00 0.00 187108.76 10957.62 256691.40 00:32:23.098 =================================================================================================================== 00:32:23.098 Total : 3448.53 215.53 0.00 0.00 166858.64 6239.03 256691.40 00:32:24.467 07:21:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:32:25.394 07:21:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1803656 00:32:25.394 07:21:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:32:25.394 07:21:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:32:25.394 07:21:39 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:32:25.394 07:21:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:25.394 07:21:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:32:25.394 07:21:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:25.394 07:21:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:32:25.394 07:21:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:32:25.394 07:21:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:32:25.394 07:21:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:32:25.394 07:21:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:25.394 07:21:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:32:25.394 rmmod nvme_rdma 00:32:25.394 rmmod nvme_fabrics 00:32:25.394 07:21:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:25.394 07:21:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:32:25.394 07:21:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:32:25.394 07:21:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1803656 ']' 00:32:25.394 07:21:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1803656 00:32:25.394 07:21:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1803656 ']' 00:32:25.394 07:21:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1803656 00:32:25.394 07:21:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:32:25.394 07:21:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:25.394 07:21:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1803656 00:32:25.651 07:21:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:25.651 07:21:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:25.651 07:21:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1803656' 00:32:25.651 killing process with pid 1803656 00:32:25.651 07:21:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1803656 00:32:25.651 07:21:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1803656 00:32:29.836 07:21:43 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:32:29.836 00:32:29.836 real 0m11.444s 00:32:29.836 user 0m43.425s 00:32:29.836 sys 0m1.614s 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:29.836 ************************************ 00:32:29.836 END TEST nvmf_shutdown_tc2 00:32:29.836 ************************************ 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:29.836 ************************************ 00:32:29.836 START TEST nvmf_shutdown_tc3 00:32:29.836 ************************************ 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:29.836 07:21:43 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:29.836 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:29.837 07:21:43 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:32:29.837 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:32:29.837 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:32:29.837 Found net devices under 0000:d9:00.0: mlx_0_0 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:32:29.837 Found net devices under 0000:d9:00.1: mlx_0_1 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # rdma_device_init 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # uname 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@63 -- # modprobe ib_core 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:32:29.837 07:21:43 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:32:29.837 07:21:43 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:32:29.837 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:29.837 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:32:29.837 altname enp217s0f0np0 00:32:29.837 altname ens818f0np0 00:32:29.837 inet 192.168.100.8/24 scope global mlx_0_0 00:32:29.837 valid_lft forever preferred_lft forever 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:32:29.837 07:21:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:32:29.837 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:32:29.837 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:29.837 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:29.837 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:32:29.837 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:32:29.838 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:29.838 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:32:29.838 altname enp217s0f1np1 00:32:29.838 altname ens818f1np1 00:32:29.838 inet 192.168.100.9/24 scope global mlx_0_1 00:32:29.838 valid_lft forever preferred_lft forever 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:29.838 07:21:44 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:32:29.838 07:21:44 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:32:29.838 192.168.100.9' 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:32:29.838 192.168.100.9' 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # head -n 1 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:32:29.838 192.168.100.9' 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # tail -n +2 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # head -n 1 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1805682 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1805682 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1805682 ']' 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:29.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:29.838 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:32:29.838 [2024-07-24 07:21:44.206118] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:32:29.838 [2024-07-24 07:21:44.206209] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:29.838 EAL: No free 2048 kB hugepages reported on node 1 00:32:29.838 [2024-07-24 07:21:44.354672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:30.095 [2024-07-24 07:21:44.562322] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:30.095 [2024-07-24 07:21:44.562365] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:30.095 [2024-07-24 07:21:44.562379] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:30.095 [2024-07-24 07:21:44.562389] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:30.095 [2024-07-24 07:21:44.562400] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
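At this point nvmfappstart has launched build/bin/nvmf_tgt with shm id 0, all tracepoint groups enabled (-e 0xFFFF) and core mask 0x1E (cores 1-4), and waitforlisten is polling until the application answers on /var/tmp/spdk.sock. A compact sketch of that launch-and-wait flow, not the autotest helper itself (the retry budget and relative paths are assumptions):

    # Sketch: start the target and wait for its RPC socket to appear
    nvmf_tgt=./build/bin/nvmf_tgt            # assumed relative build path
    rpc_sock=/var/tmp/spdk.sock
    "$nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    for _ in $(seq 1 100); do                # ~10 s at 0.1 s per try, assumed budget
        [ -S "$rpc_sock" ] && break          # socket present once the app is listening
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; break; }
        sleep 0.1
    done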
00:32:30.095 [2024-07-24 07:21:44.562520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:30.095 [2024-07-24 07:21:44.562621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:30.095 [2024-07-24 07:21:44.562717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:30.095 [2024-07-24 07:21:44.562743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:32:30.659 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:30.659 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:32:30.659 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:30.659 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:30.659 07:21:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:30.659 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:30.659 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:32:30.659 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.659 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:30.659 [2024-07-24 07:21:45.056930] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7f5285ba7940) succeed. 00:32:30.659 [2024-07-24 07:21:45.066360] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7f5285b63940) succeed. 
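With the reactors running, shutdown.sh:20 creates the RDMA transport (1024 shared buffers, 8192-byte in-capsule data), and the target instantiates IB devices for both mlx5 ports. The same call expressed directly against the stock rpc.py client would look roughly like this (the rpc.py path is an assumption; the option values are the ones traced above):

    # Sketch: the transport setup issued by rpc_cmd, spelled out via rpc.py
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport \
        -t rdma \
        --num-shared-buffers 1024 \
        -u 8192        # max in-capsule data size in bytes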
00:32:30.917 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.917 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:32:30.917 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:32:30.917 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:30.917 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:30.917 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:30.917 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:32:30.917 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:32:30.917 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:32:30.917 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:32:30.917 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:32:30.917 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:32:30.917 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:32:30.917 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:32:30.917 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:32:30.917 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:32:30.917 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:32:30.917 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:32:30.917 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:32:30.917 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:32:30.917 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:32:30.917 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:32:30.917 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:32:30.917 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:32:30.917 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:32:30.917 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:32:30.917 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@35 -- # rpc_cmd 00:32:30.917 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.917 07:21:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:30.917 Malloc1 00:32:30.918 [2024-07-24 07:21:45.541352] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:32:31.175 Malloc2 00:32:31.175 Malloc3 00:32:31.433 Malloc4 00:32:31.433 Malloc5 00:32:31.690 Malloc6 00:32:31.690 Malloc7 00:32:31.690 Malloc8 00:32:31.948 Malloc9 00:32:31.948 Malloc10 00:32:31.948 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.948 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:32:31.948 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:31.948 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:32.206 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1806031 00:32:32.206 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1806031 /var/tmp/bdevperf.sock 00:32:32.206 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1806031 ']' 00:32:32.206 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:32.206 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:32.206 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:32:32.206 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:32.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
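The loop at shutdown.sh:27-28 cats one RPC block per subsystem (1 through 10) into rpcs.txt, and the single rpc_cmd at line 35 replays the whole file, which is why the Malloc1..Malloc10 bdevs and the cnode1 listener notice appear back to back above. The file's contents are not echoed in these records; a plausible per-iteration block built from standard SPDK RPC names would be along these lines (bdev size, serial number and listener address are assumptions keyed off the listener notice):

    # Sketch of what one loop iteration likely appends to rpcs.txt for subsystem $i
    {
        echo "bdev_malloc_create -b Malloc$i 128 512"
        echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
        echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
        echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420"
    } >> rpcs.txt

bdevperf (pid 1806031) is then started against /var/tmp/bdevperf.sock with a JSON config generated on the fly, as the next records show.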
00:32:32.206 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:32:32.206 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:32.206 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:32:32.206 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:32.206 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:32:32.206 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:32.206 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:32.206 { 00:32:32.206 "params": { 00:32:32.206 "name": "Nvme$subsystem", 00:32:32.206 "trtype": "$TEST_TRANSPORT", 00:32:32.206 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:32.206 "adrfam": "ipv4", 00:32:32.206 "trsvcid": "$NVMF_PORT", 00:32:32.206 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:32.206 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:32.206 "hdgst": ${hdgst:-false}, 00:32:32.206 "ddgst": ${ddgst:-false} 00:32:32.206 }, 00:32:32.206 "method": "bdev_nvme_attach_controller" 00:32:32.206 } 00:32:32.206 EOF 00:32:32.206 )") 00:32:32.206 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:32:32.206 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:32.206 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:32.206 { 00:32:32.206 "params": { 00:32:32.206 "name": "Nvme$subsystem", 00:32:32.206 "trtype": "$TEST_TRANSPORT", 00:32:32.206 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:32.206 "adrfam": "ipv4", 00:32:32.206 "trsvcid": "$NVMF_PORT", 00:32:32.206 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:32.206 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:32.206 "hdgst": ${hdgst:-false}, 00:32:32.206 "ddgst": ${ddgst:-false} 00:32:32.206 }, 00:32:32.206 "method": "bdev_nvme_attach_controller" 00:32:32.206 } 00:32:32.206 EOF 00:32:32.206 )") 00:32:32.206 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:32:32.206 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:32.206 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:32.206 { 00:32:32.206 "params": { 00:32:32.206 "name": "Nvme$subsystem", 00:32:32.206 "trtype": "$TEST_TRANSPORT", 00:32:32.206 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:32.206 "adrfam": "ipv4", 00:32:32.206 "trsvcid": "$NVMF_PORT", 00:32:32.206 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:32.206 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:32.206 "hdgst": ${hdgst:-false}, 00:32:32.206 "ddgst": ${ddgst:-false} 00:32:32.206 }, 00:32:32.206 "method": "bdev_nvme_attach_controller" 00:32:32.206 } 00:32:32.206 EOF 00:32:32.206 )") 00:32:32.206 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:32:32.206 07:21:46 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:32.206 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:32.206 { 00:32:32.206 "params": { 00:32:32.206 "name": "Nvme$subsystem", 00:32:32.207 "trtype": "$TEST_TRANSPORT", 00:32:32.207 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:32.207 "adrfam": "ipv4", 00:32:32.207 "trsvcid": "$NVMF_PORT", 00:32:32.207 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:32.207 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:32.207 "hdgst": ${hdgst:-false}, 00:32:32.207 "ddgst": ${ddgst:-false} 00:32:32.207 }, 00:32:32.207 "method": "bdev_nvme_attach_controller" 00:32:32.207 } 00:32:32.207 EOF 00:32:32.207 )") 00:32:32.207 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:32:32.207 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:32.207 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:32.207 { 00:32:32.207 "params": { 00:32:32.207 "name": "Nvme$subsystem", 00:32:32.207 "trtype": "$TEST_TRANSPORT", 00:32:32.207 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:32.207 "adrfam": "ipv4", 00:32:32.207 "trsvcid": "$NVMF_PORT", 00:32:32.207 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:32.207 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:32.207 "hdgst": ${hdgst:-false}, 00:32:32.207 "ddgst": ${ddgst:-false} 00:32:32.207 }, 00:32:32.207 "method": "bdev_nvme_attach_controller" 00:32:32.207 } 00:32:32.207 EOF 00:32:32.207 )") 00:32:32.207 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:32:32.207 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:32.207 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:32.207 { 00:32:32.207 "params": { 00:32:32.207 "name": "Nvme$subsystem", 00:32:32.207 "trtype": "$TEST_TRANSPORT", 00:32:32.207 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:32.207 "adrfam": "ipv4", 00:32:32.207 "trsvcid": "$NVMF_PORT", 00:32:32.207 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:32.207 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:32.207 "hdgst": ${hdgst:-false}, 00:32:32.207 "ddgst": ${ddgst:-false} 00:32:32.207 }, 00:32:32.207 "method": "bdev_nvme_attach_controller" 00:32:32.207 } 00:32:32.207 EOF 00:32:32.207 )") 00:32:32.207 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:32:32.207 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:32.207 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:32.207 { 00:32:32.207 "params": { 00:32:32.207 "name": "Nvme$subsystem", 00:32:32.207 "trtype": "$TEST_TRANSPORT", 00:32:32.207 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:32.207 "adrfam": "ipv4", 00:32:32.207 "trsvcid": "$NVMF_PORT", 00:32:32.207 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:32.207 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:32.207 "hdgst": ${hdgst:-false}, 00:32:32.207 "ddgst": ${ddgst:-false} 00:32:32.207 }, 00:32:32.207 "method": 
"bdev_nvme_attach_controller" 00:32:32.207 } 00:32:32.207 EOF 00:32:32.207 )") 00:32:32.207 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:32:32.207 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:32.207 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:32.207 { 00:32:32.207 "params": { 00:32:32.207 "name": "Nvme$subsystem", 00:32:32.207 "trtype": "$TEST_TRANSPORT", 00:32:32.207 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:32.207 "adrfam": "ipv4", 00:32:32.207 "trsvcid": "$NVMF_PORT", 00:32:32.207 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:32.207 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:32.207 "hdgst": ${hdgst:-false}, 00:32:32.207 "ddgst": ${ddgst:-false} 00:32:32.207 }, 00:32:32.207 "method": "bdev_nvme_attach_controller" 00:32:32.207 } 00:32:32.207 EOF 00:32:32.207 )") 00:32:32.207 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:32:32.207 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:32.207 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:32.207 { 00:32:32.207 "params": { 00:32:32.207 "name": "Nvme$subsystem", 00:32:32.207 "trtype": "$TEST_TRANSPORT", 00:32:32.207 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:32.207 "adrfam": "ipv4", 00:32:32.207 "trsvcid": "$NVMF_PORT", 00:32:32.207 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:32.207 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:32.207 "hdgst": ${hdgst:-false}, 00:32:32.207 "ddgst": ${ddgst:-false} 00:32:32.207 }, 00:32:32.207 "method": "bdev_nvme_attach_controller" 00:32:32.207 } 00:32:32.207 EOF 00:32:32.207 )") 00:32:32.207 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:32:32.207 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:32.207 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:32.207 { 00:32:32.207 "params": { 00:32:32.207 "name": "Nvme$subsystem", 00:32:32.207 "trtype": "$TEST_TRANSPORT", 00:32:32.207 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:32.207 "adrfam": "ipv4", 00:32:32.207 "trsvcid": "$NVMF_PORT", 00:32:32.207 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:32.207 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:32.207 "hdgst": ${hdgst:-false}, 00:32:32.207 "ddgst": ${ddgst:-false} 00:32:32.207 }, 00:32:32.207 "method": "bdev_nvme_attach_controller" 00:32:32.207 } 00:32:32.207 EOF 00:32:32.207 )") 00:32:32.207 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:32:32.207 [2024-07-24 07:21:46.670012] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:32:32.207 [2024-07-24 07:21:46.670107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1806031 ] 00:32:32.207 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:32:32.207 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:32:32.207 07:21:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:32.207 "params": { 00:32:32.207 "name": "Nvme1", 00:32:32.207 "trtype": "rdma", 00:32:32.207 "traddr": "192.168.100.8", 00:32:32.207 "adrfam": "ipv4", 00:32:32.207 "trsvcid": "4420", 00:32:32.207 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:32.207 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:32.207 "hdgst": false, 00:32:32.207 "ddgst": false 00:32:32.207 }, 00:32:32.207 "method": "bdev_nvme_attach_controller" 00:32:32.207 },{ 00:32:32.207 "params": { 00:32:32.207 "name": "Nvme2", 00:32:32.207 "trtype": "rdma", 00:32:32.207 "traddr": "192.168.100.8", 00:32:32.207 "adrfam": "ipv4", 00:32:32.207 "trsvcid": "4420", 00:32:32.207 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:32.207 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:32.207 "hdgst": false, 00:32:32.207 "ddgst": false 00:32:32.207 }, 00:32:32.207 "method": "bdev_nvme_attach_controller" 00:32:32.207 },{ 00:32:32.207 "params": { 00:32:32.207 "name": "Nvme3", 00:32:32.207 "trtype": "rdma", 00:32:32.207 "traddr": "192.168.100.8", 00:32:32.207 "adrfam": "ipv4", 00:32:32.207 "trsvcid": "4420", 00:32:32.207 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:32:32.207 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:32:32.207 "hdgst": false, 00:32:32.207 "ddgst": false 00:32:32.207 }, 00:32:32.207 "method": "bdev_nvme_attach_controller" 00:32:32.207 },{ 00:32:32.207 "params": { 00:32:32.207 "name": "Nvme4", 00:32:32.207 "trtype": "rdma", 00:32:32.207 "traddr": "192.168.100.8", 00:32:32.207 "adrfam": "ipv4", 00:32:32.207 "trsvcid": "4420", 00:32:32.207 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:32:32.207 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:32:32.207 "hdgst": false, 00:32:32.207 "ddgst": false 00:32:32.207 }, 00:32:32.207 "method": "bdev_nvme_attach_controller" 00:32:32.207 },{ 00:32:32.207 "params": { 00:32:32.207 "name": "Nvme5", 00:32:32.207 "trtype": "rdma", 00:32:32.207 "traddr": "192.168.100.8", 00:32:32.207 "adrfam": "ipv4", 00:32:32.207 "trsvcid": "4420", 00:32:32.207 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:32:32.207 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:32:32.207 "hdgst": false, 00:32:32.207 "ddgst": false 00:32:32.207 }, 00:32:32.207 "method": "bdev_nvme_attach_controller" 00:32:32.207 },{ 00:32:32.207 "params": { 00:32:32.207 "name": "Nvme6", 00:32:32.207 "trtype": "rdma", 00:32:32.207 "traddr": "192.168.100.8", 00:32:32.207 "adrfam": "ipv4", 00:32:32.207 "trsvcid": "4420", 00:32:32.207 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:32:32.207 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:32:32.207 "hdgst": false, 00:32:32.207 "ddgst": false 00:32:32.207 }, 00:32:32.207 "method": "bdev_nvme_attach_controller" 00:32:32.207 },{ 00:32:32.207 "params": { 00:32:32.207 "name": "Nvme7", 00:32:32.208 "trtype": "rdma", 00:32:32.208 "traddr": "192.168.100.8", 00:32:32.208 "adrfam": "ipv4", 00:32:32.208 "trsvcid": "4420", 00:32:32.208 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:32:32.208 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:32:32.208 "hdgst": false, 00:32:32.208 "ddgst": false 00:32:32.208 }, 00:32:32.208 "method": "bdev_nvme_attach_controller" 00:32:32.208 },{ 00:32:32.208 "params": { 00:32:32.208 "name": "Nvme8", 00:32:32.208 "trtype": "rdma", 00:32:32.208 "traddr": "192.168.100.8", 00:32:32.208 "adrfam": "ipv4", 00:32:32.208 "trsvcid": "4420", 00:32:32.208 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:32:32.208 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:32:32.208 "hdgst": false, 00:32:32.208 "ddgst": false 00:32:32.208 }, 00:32:32.208 "method": "bdev_nvme_attach_controller" 00:32:32.208 },{ 00:32:32.208 "params": { 00:32:32.208 "name": "Nvme9", 00:32:32.208 "trtype": "rdma", 00:32:32.208 "traddr": "192.168.100.8", 00:32:32.208 "adrfam": "ipv4", 00:32:32.208 "trsvcid": "4420", 00:32:32.208 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:32:32.208 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:32:32.208 "hdgst": false, 00:32:32.208 "ddgst": false 00:32:32.208 }, 00:32:32.208 "method": "bdev_nvme_attach_controller" 00:32:32.208 },{ 00:32:32.208 "params": { 00:32:32.208 "name": "Nvme10", 00:32:32.208 "trtype": "rdma", 00:32:32.208 "traddr": "192.168.100.8", 00:32:32.208 "adrfam": "ipv4", 00:32:32.208 "trsvcid": "4420", 00:32:32.208 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:32:32.208 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:32:32.208 "hdgst": false, 00:32:32.208 "ddgst": false 00:32:32.208 }, 00:32:32.208 "method": "bdev_nvme_attach_controller" 00:32:32.208 }' 00:32:32.208 EAL: No free 2048 kB hugepages reported on node 1 00:32:32.208 [2024-07-24 07:21:46.821889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:32.465 [2024-07-24 07:21:47.036762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:33.836 Running I/O for 10 seconds... 00:32:33.836 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:33.836 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:32:33.836 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:32:33.836 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.836 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:33.836 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.836 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:33.836 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:32:33.836 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:32:33.836 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:32:33.836 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:32:33.836 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:32:33.836 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:32:33.836 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:32:33.836 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:32:33.836 07:21:48 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:32:33.836 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.836 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:34.094 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.094 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:32:34.094 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:32:34.094 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:32:34.351 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:32:34.351 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:32:34.351 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:32:34.351 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:32:34.351 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.351 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:34.351 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.351 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=115 00:32:34.351 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 115 -ge 100 ']' 00:32:34.351 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:32:34.351 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:32:34.351 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:32:34.351 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1805682 00:32:34.351 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1805682 ']' 00:32:34.351 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1805682 00:32:34.351 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:32:34.351 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:34.351 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1805682 00:32:34.351 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:34.351 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:34.351 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1805682' 00:32:34.351 killing process with pid 1805682 00:32:34.351 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1805682 00:32:34.351 07:21:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1805682 00:32:35.742 [2024-07-24 07:21:49.960220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.742 [2024-07-24 07:21:49.960282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.742 [2024-07-24 07:21:49.960301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.742 [2024-07-24 07:21:49.960314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.742 [2024-07-24 07:21:49.960328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.742 [2024-07-24 07:21:49.960342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.742 [2024-07-24 07:21:49.960356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.742 [2024-07-24 07:21:49.960369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.742 [2024-07-24 07:21:49.962755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:35.742 [2024-07-24 07:21:49.962781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
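tc3 only brings the target down once bdevperf is demonstrably driving I/O: waitforio polls bdev_get_iostat for Nvme1n1 every 0.25 s until num_read_ops reaches 100 (3 on the first probe, 115 on the second above), and killprocess then signals pid 1805682. The admin-queue aborts and "in failed state" errors that follow are the host-side controllers reacting to the target disappearing under them. A compact restatement of the poll-then-kill step, with an assumed retry budget:

    # Sketch of the waitforio + killprocess step traced above
    for _ in $(seq 1 10); do                                         # retry budget, assumed
        reads=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
                    | jq -r '.bdevs[0].num_read_ops')
        [ "$reads" -ge 100 ] && break                                 # enough reads observed
        sleep 0.25
    done
    kill "$nvmfpid"                                                   # stop the nvmf target mid-I/O
    wait "$nvmfpid" 2>/dev/null || true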
00:32:35.742 [2024-07-24 07:21:49.962818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.742 [2024-07-24 07:21:49.962837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.742 [2024-07-24 07:21:49.962852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.742 [2024-07-24 07:21:49.962865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.742 [2024-07-24 07:21:49.962879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.962892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.962906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.962918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.964940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:35.743 [2024-07-24 07:21:49.964962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:32:35.743 [2024-07-24 07:21:49.964986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.965001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.965016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.965028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.965041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.965055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.965068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.965080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.967114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:35.743 [2024-07-24 07:21:49.967134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:32:35.743 [2024-07-24 07:21:49.967159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.967175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.967190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.967204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.967219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.967234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.967248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.967265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.969301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:35.743 [2024-07-24 07:21:49.969321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:32:35.743 [2024-07-24 07:21:49.969346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.969362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.969377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.969390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.969405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.969419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.969433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.969447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.971857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:35.743 [2024-07-24 07:21:49.971878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:32:35.743 [2024-07-24 07:21:49.971903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.971919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.971934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.971948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.971963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.971977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.971992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.972005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.974163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:35.743 [2024-07-24 07:21:49.974183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.743 [2024-07-24 07:21:49.974208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.974223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.974238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.974252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.974270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.974284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.974299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.974313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.976670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:35.743 [2024-07-24 07:21:49.976690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:32:35.743 [2024-07-24 07:21:49.976715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.976730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.976746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.976759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.976774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.976802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.976820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.976838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.979235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:35.743 [2024-07-24 07:21:49.979260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:32:35.743 [2024-07-24 07:21:49.979289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.979309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32753 cdw0:0 sqhd:53e0 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.979327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.979345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32753 cdw0:0 sqhd:53e0 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.979363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.979380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32753 cdw0:0 sqhd:53e0 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.979398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.979415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32753 cdw0:0 sqhd:53e0 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.981705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:35.743 [2024-07-24 07:21:49.981731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:32:35.743 [2024-07-24 07:21:49.981764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.981784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.981803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.743 [2024-07-24 07:21:49.981821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.743 [2024-07-24 07:21:49.981839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.744 [2024-07-24 07:21:49.981856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.744 [2024-07-24 07:21:49.981874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.744 [2024-07-24 07:21:49.981891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.744 [2024-07-24 07:21:49.984404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:35.744 [2024-07-24 07:21:49.984431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:32:35.744 [2024-07-24 07:21:49.987074] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019956500 was disconnected and freed. reset controller. 00:32:35.744 [2024-07-24 07:21:49.987108] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:35.744 [2024-07-24 07:21:49.989613] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019956240 was disconnected and freed. reset controller. 00:32:35.744 [2024-07-24 07:21:49.989650] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:35.744 [2024-07-24 07:21:49.991910] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60e680 was disconnected and freed. reset controller. 00:32:35.744 [2024-07-24 07:21:49.991939] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:35.744 [2024-07-24 07:21:49.994526] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60e3c0 was disconnected and freed. reset controller. 00:32:35.744 [2024-07-24 07:21:49.994553] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:35.744 [2024-07-24 07:21:49.996929] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60e100 was disconnected and freed. reset controller. 00:32:35.744 [2024-07-24 07:21:49.996955] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:35.744 [2024-07-24 07:21:49.999228] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60de40 was disconnected and freed. reset controller. 
00:32:35.744 [2024-07-24 07:21:49.999255] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:35.744 [2024-07-24 07:21:50.001320] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60db80 was disconnected and freed. reset controller. 00:32:35.744 [2024-07-24 07:21:50.001346] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:35.744 [2024-07-24 07:21:50.004059] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60d8c0 was disconnected and freed. reset controller. 00:32:35.744 [2024-07-24 07:21:50.004086] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:35.744 [2024-07-24 07:21:50.004170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae0f300 len:0x10000 key:0x183b00 00:32:35.744 [2024-07-24 07:21:50.004199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.744 [2024-07-24 07:21:50.004232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1effc0 len:0x10000 key:0x183100 00:32:35.744 [2024-07-24 07:21:50.004251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.744 [2024-07-24 07:21:50.004276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1dff00 len:0x10000 key:0x183100 00:32:35.744 [2024-07-24 07:21:50.004294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.744 [2024-07-24 07:21:50.004319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1cfe40 len:0x10000 key:0x183100 00:32:35.744 [2024-07-24 07:21:50.004337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.744 [2024-07-24 07:21:50.004361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1bfd80 len:0x10000 key:0x183100 00:32:35.744 [2024-07-24 07:21:50.004381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.744 [2024-07-24 07:21:50.004405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1afcc0 len:0x10000 key:0x183100 00:32:35.744 [2024-07-24 07:21:50.004423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.744 [2024-07-24 07:21:50.004448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b19fc00 len:0x10000 key:0x183100 00:32:35.744 [2024-07-24 07:21:50.004466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.744 [2024-07-24 07:21:50.004489] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b18fb40 len:0x10000 key:0x183100 00:32:35.744 [2024-07-24 07:21:50.004508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.744 [2024-07-24 07:21:50.004532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b17fa80 len:0x10000 key:0x183100 00:32:35.744 [2024-07-24 07:21:50.004550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.744 [2024-07-24 07:21:50.004573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b16f9c0 len:0x10000 key:0x183100 00:32:35.744 [2024-07-24 07:21:50.004592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.744 [2024-07-24 07:21:50.004615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b15f900 len:0x10000 key:0x183100 00:32:35.744 [2024-07-24 07:21:50.004642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.744 [2024-07-24 07:21:50.004666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b14f840 len:0x10000 key:0x183100 00:32:35.744 [2024-07-24 07:21:50.004684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.744 [2024-07-24 07:21:50.004712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b13f780 len:0x10000 key:0x183100 00:32:35.744 [2024-07-24 07:21:50.004730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.744 [2024-07-24 07:21:50.004754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b12f6c0 len:0x10000 key:0x183100 00:32:35.744 [2024-07-24 07:21:50.004774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.744 [2024-07-24 07:21:50.004798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b11f600 len:0x10000 key:0x183100 00:32:35.744 [2024-07-24 07:21:50.004816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.744 [2024-07-24 07:21:50.004841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b10f540 len:0x10000 key:0x183100 00:32:35.744 [2024-07-24 07:21:50.004859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.744 [2024-07-24 07:21:50.004884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA 
BLOCK ADDRESS 0x20001b0ff480 len:0x10000 key:0x183100 00:32:35.744 [2024-07-24 07:21:50.004902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.744 [2024-07-24 07:21:50.004926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ef3c0 len:0x10000 key:0x183100 00:32:35.744 [2024-07-24 07:21:50.004945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.744 [2024-07-24 07:21:50.004969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0df300 len:0x10000 key:0x183100 00:32:35.744 [2024-07-24 07:21:50.004988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.744 [2024-07-24 07:21:50.005011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0cf240 len:0x10000 key:0x183100 00:32:35.744 [2024-07-24 07:21:50.005032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.744 [2024-07-24 07:21:50.005056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0bf180 len:0x10000 key:0x183100 00:32:35.744 [2024-07-24 07:21:50.005076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.744 [2024-07-24 07:21:50.005099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0af0c0 len:0x10000 key:0x183100 00:32:35.744 [2024-07-24 07:21:50.005117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.744 [2024-07-24 07:21:50.005141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09f000 len:0x10000 key:0x183100 00:32:35.744 [2024-07-24 07:21:50.005159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.744 [2024-07-24 07:21:50.005183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b08ef40 len:0x10000 key:0x183100 00:32:35.744 [2024-07-24 07:21:50.005203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.744 [2024-07-24 07:21:50.005227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b07ee80 len:0x10000 key:0x183100 00:32:35.744 [2024-07-24 07:21:50.005246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.744 [2024-07-24 07:21:50.005269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b06edc0 len:0x10000 key:0x183100 00:32:35.744 [2024-07-24 07:21:50.005289] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.744 [2024-07-24 07:21:50.005312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05ed00 len:0x10000 key:0x183100 00:32:35.744 [2024-07-24 07:21:50.005330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.005353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b04ec40 len:0x10000 key:0x183100 00:32:35.745 [2024-07-24 07:21:50.005371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.005395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b03eb80 len:0x10000 key:0x183100 00:32:35.745 [2024-07-24 07:21:50.005414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.005439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02eac0 len:0x10000 key:0x183100 00:32:35.745 [2024-07-24 07:21:50.005457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.005480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b01ea00 len:0x10000 key:0x183100 00:32:35.745 [2024-07-24 07:21:50.005498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.005521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b00e940 len:0x10000 key:0x183100 00:32:35.745 [2024-07-24 07:21:50.005539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.005562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3effc0 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 07:21:50.005581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.005604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3dff00 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 07:21:50.005622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.005653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3cfe40 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 07:21:50.005673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.005698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfd80 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 07:21:50.005717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.005741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3afcc0 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 07:21:50.005760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.005784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b39fc00 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 07:21:50.005803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.005826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fb40 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 07:21:50.005845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.005869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b37fa80 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 07:21:50.005887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.005912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b36f9c0 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 07:21:50.005931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.005955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35f900 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 07:21:50.005974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.005996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b34f840 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 07:21:50.006015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.006039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b33f780 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 07:21:50.006057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.006080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b32f6c0 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 07:21:50.006099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.006122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b31f600 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 07:21:50.006142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.006165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b30f540 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 07:21:50.006185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.006209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ff480 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 07:21:50.006228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.006253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ef3c0 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 07:21:50.006272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.006295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2df300 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 07:21:50.006314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.006368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf240 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 07:21:50.006386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.006410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2bf180 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 07:21:50.006429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.006452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2af0c0 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 07:21:50.006470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.006494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x20001b29f000 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 07:21:50.006513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.006536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28ef40 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 07:21:50.006555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.006578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b27ee80 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 07:21:50.006597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.006620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26edc0 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 07:21:50.006681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.006706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b25ed00 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 07:21:50.006727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.006751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24ec40 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 07:21:50.006769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.006793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23eb80 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 07:21:50.006811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.006834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b22eac0 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 07:21:50.006854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.006877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21ea00 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 07:21:50.006895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.745 [2024-07-24 07:21:50.006919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b20e940 len:0x10000 key:0x183700 00:32:35.745 [2024-07-24 
07:21:50.006937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.006961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae1f3c0 len:0x10000 key:0x183b00 00:32:35.746 [2024-07-24 07:21:50.006980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.009947] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60d600 was disconnected and freed. reset controller. 00:32:35.746 [2024-07-24 07:21:50.009974] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:35.746 [2024-07-24 07:21:50.010002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4dfcc0 len:0x10000 key:0x182f00 00:32:35.746 [2024-07-24 07:21:50.010022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.010054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4cfc00 len:0x10000 key:0x182f00 00:32:35.746 [2024-07-24 07:21:50.010074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.010098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4bfb40 len:0x10000 key:0x182f00 00:32:35.746 [2024-07-24 07:21:50.010117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.010141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4afa80 len:0x10000 key:0x182f00 00:32:35.746 [2024-07-24 07:21:50.010160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.010184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b49f9c0 len:0x10000 key:0x182f00 00:32:35.746 [2024-07-24 07:21:50.010206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.010230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b48f900 len:0x10000 key:0x182f00 00:32:35.746 [2024-07-24 07:21:50.010248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.010273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b47f840 len:0x10000 key:0x182f00 00:32:35.746 [2024-07-24 07:21:50.010292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 
[2024-07-24 07:21:50.010316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b46f780 len:0x10000 key:0x182f00 00:32:35.746 [2024-07-24 07:21:50.010336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.010360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b45f6c0 len:0x10000 key:0x182f00 00:32:35.746 [2024-07-24 07:21:50.010378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.010402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b44f600 len:0x10000 key:0x182f00 00:32:35.746 [2024-07-24 07:21:50.010421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.010445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b43f540 len:0x10000 key:0x182f00 00:32:35.746 [2024-07-24 07:21:50.010463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.010487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b42f480 len:0x10000 key:0x182f00 00:32:35.746 [2024-07-24 07:21:50.010506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.010531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b41f3c0 len:0x10000 key:0x182f00 00:32:35.746 [2024-07-24 07:21:50.010550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.010573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b40f300 len:0x10000 key:0x182f00 00:32:35.746 [2024-07-24 07:21:50.010592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.010616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7effc0 len:0x10000 key:0x183f00 00:32:35.746 [2024-07-24 07:21:50.010689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.010714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7dff00 len:0x10000 key:0x183f00 00:32:35.746 [2024-07-24 07:21:50.010735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.010760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 
nsid:1 lba:18432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7cfe40 len:0x10000 key:0x183f00 00:32:35.746 [2024-07-24 07:21:50.010779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.010802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7bfd80 len:0x10000 key:0x183f00 00:32:35.746 [2024-07-24 07:21:50.010821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.010845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7afcc0 len:0x10000 key:0x183f00 00:32:35.746 [2024-07-24 07:21:50.010863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.010886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b79fc00 len:0x10000 key:0x183f00 00:32:35.746 [2024-07-24 07:21:50.010904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.010928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b78fb40 len:0x10000 key:0x183f00 00:32:35.746 [2024-07-24 07:21:50.010947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.010970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b77fa80 len:0x10000 key:0x183f00 00:32:35.746 [2024-07-24 07:21:50.010988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.011013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b76f9c0 len:0x10000 key:0x183f00 00:32:35.746 [2024-07-24 07:21:50.011032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.011056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b75f900 len:0x10000 key:0x183f00 00:32:35.746 [2024-07-24 07:21:50.011077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.011101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b74f840 len:0x10000 key:0x183f00 00:32:35.746 [2024-07-24 07:21:50.011120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.011144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b73f780 len:0x10000 key:0x183f00 
00:32:35.746 [2024-07-24 07:21:50.011163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.011187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b72f6c0 len:0x10000 key:0x183f00 00:32:35.746 [2024-07-24 07:21:50.011206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.011233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b71f600 len:0x10000 key:0x183f00 00:32:35.746 [2024-07-24 07:21:50.011252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.011276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b70f540 len:0x10000 key:0x183f00 00:32:35.746 [2024-07-24 07:21:50.011294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.011318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ff480 len:0x10000 key:0x183f00 00:32:35.746 [2024-07-24 07:21:50.011338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.011362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ef3c0 len:0x10000 key:0x183f00 00:32:35.746 [2024-07-24 07:21:50.011380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.011403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6df300 len:0x10000 key:0x183f00 00:32:35.746 [2024-07-24 07:21:50.011422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.011445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6cf240 len:0x10000 key:0x183f00 00:32:35.746 [2024-07-24 07:21:50.011464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.011489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6bf180 len:0x10000 key:0x183f00 00:32:35.746 [2024-07-24 07:21:50.011508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.746 [2024-07-24 07:21:50.011532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6af0c0 len:0x10000 key:0x183f00 00:32:35.747 [2024-07-24 07:21:50.011551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.747 [2024-07-24 07:21:50.011574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b69f000 len:0x10000 key:0x183f00 00:32:35.747 [2024-07-24 07:21:50.011593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.747 [2024-07-24 07:21:50.011617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b68ef40 len:0x10000 key:0x183f00 00:32:35.747 [2024-07-24 07:21:50.011646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.747 [2024-07-24 07:21:50.011669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b67ee80 len:0x10000 key:0x183f00 00:32:35.747 [2024-07-24 07:21:50.011687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.747 [2024-07-24 07:21:50.011711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b66edc0 len:0x10000 key:0x183f00 00:32:35.747 [2024-07-24 07:21:50.011732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.747 [2024-07-24 07:21:50.011756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b65ed00 len:0x10000 key:0x183f00 00:32:35.747 [2024-07-24 07:21:50.011775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.747 [2024-07-24 07:21:50.011799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b64ec40 len:0x10000 key:0x183f00 00:32:35.747 [2024-07-24 07:21:50.011817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.747 [2024-07-24 07:21:50.011842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b63eb80 len:0x10000 key:0x183f00 00:32:35.747 [2024-07-24 07:21:50.011861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.747 [2024-07-24 07:21:50.011885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b62eac0 len:0x10000 key:0x183f00 00:32:35.747 [2024-07-24 07:21:50.011903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.748 [2024-07-24 07:21:50.011926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b61ea00 len:0x10000 key:0x183f00 00:32:35.748 [2024-07-24 07:21:50.011945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.748 [2024-07-24 
07:21:50.011969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b60e940 len:0x10000 key:0x183f00 00:32:35.748 [2024-07-24 07:21:50.011988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.748 [2024-07-24 07:21:50.012011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9effc0 len:0x10000 key:0x183200 00:32:35.748 [2024-07-24 07:21:50.012029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.748 [2024-07-24 07:21:50.012054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9dff00 len:0x10000 key:0x183200 00:32:35.748 [2024-07-24 07:21:50.012074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.748 [2024-07-24 07:21:50.012096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9cfe40 len:0x10000 key:0x183200 00:32:35.748 [2024-07-24 07:21:50.012115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.748 [2024-07-24 07:21:50.012139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9bfd80 len:0x10000 key:0x183200 00:32:35.748 [2024-07-24 07:21:50.012158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.748 [2024-07-24 07:21:50.012191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9afcc0 len:0x10000 key:0x183200 00:32:35.748 [2024-07-24 07:21:50.012210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.748 [2024-07-24 07:21:50.012237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b99fc00 len:0x10000 key:0x183200 00:32:35.748 [2024-07-24 07:21:50.012255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.748 [2024-07-24 07:21:50.012279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b98fb40 len:0x10000 key:0x183200 00:32:35.748 [2024-07-24 07:21:50.012298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.748 [2024-07-24 07:21:50.012321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b97fa80 len:0x10000 key:0x183200 00:32:35.748 [2024-07-24 07:21:50.012340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.748 [2024-07-24 07:21:50.012363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 
nsid:1 lba:23168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b96f9c0 len:0x10000 key:0x183200 00:32:35.748 [2024-07-24 07:21:50.012381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.748 [2024-07-24 07:21:50.012405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b95f900 len:0x10000 key:0x183200 00:32:35.748 [2024-07-24 07:21:50.012424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.748 [2024-07-24 07:21:50.012447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b94f840 len:0x10000 key:0x183200 00:32:35.748 [2024-07-24 07:21:50.012466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.748 [2024-07-24 07:21:50.012491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b93f780 len:0x10000 key:0x183200 00:32:35.748 [2024-07-24 07:21:50.012510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.748 [2024-07-24 07:21:50.012534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b92f6c0 len:0x10000 key:0x183200 00:32:35.748 [2024-07-24 07:21:50.012552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.748 [2024-07-24 07:21:50.012576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b91f600 len:0x10000 key:0x183200 00:32:35.748 [2024-07-24 07:21:50.012595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.748 [2024-07-24 07:21:50.012619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b90f540 len:0x10000 key:0x183200 00:32:35.748 [2024-07-24 07:21:50.012646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.748 [2024-07-24 07:21:50.012672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ff480 len:0x10000 key:0x183200 00:32:35.748 [2024-07-24 07:21:50.012690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.748 [2024-07-24 07:21:50.012714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ef3c0 len:0x10000 key:0x183200 00:32:35.748 [2024-07-24 07:21:50.012734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.748 [2024-07-24 07:21:50.012759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8df300 len:0x10000 key:0x183200 
00:32:35.748 [2024-07-24 07:21:50.012778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.748 [2024-07-24 07:21:50.012803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4efd80 len:0x10000 key:0x182f00 00:32:35.748 [2024-07-24 07:21:50.012822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.748 [2024-07-24 07:21:50.049541] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60d340 was disconnected and freed. reset controller. 00:32:35.748 [2024-07-24 07:21:50.049570] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:35.748 [2024-07-24 07:21:50.049670] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:35.748 [2024-07-24 07:21:50.049694] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:35.748 [2024-07-24 07:21:50.049712] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:35.748 [2024-07-24 07:21:50.049727] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:35.748 [2024-07-24 07:21:50.049742] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:35.748 [2024-07-24 07:21:50.049757] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:35.748 [2024-07-24 07:21:50.049772] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:35.748 [2024-07-24 07:21:50.049787] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:35.748 [2024-07-24 07:21:50.049803] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:35.748 [2024-07-24 07:21:50.049818] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:32:35.748 [2024-07-24 07:21:50.056836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:32:35.748 [2024-07-24 07:21:50.056871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 
00:32:35.748 [2024-07-24 07:21:50.057816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 
00:32:35.748 [2024-07-24 07:21:50.057848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 
00:32:35.748 [2024-07-24 07:21:50.057864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 
00:32:35.748 [2024-07-24 07:21:50.057880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 
00:32:35.748 [2024-07-24 07:21:50.061011] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 
00:32:35.748 task offset: 30720 on job bdev=Nvme1n1 fails 
00:32:35.748 
00:32:35.748 Latency(us) 
00:32:35.748 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:32:35.748 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:32:35.748 Job: Nvme1n1 ended in about 1.79 seconds with error 
00:32:35.748 Verification LBA range: start 0x0 length 0x400 
00:32:35.748 Nvme1n1 : 1.79 120.34 7.52 35.66 0.00 407099.15 8283.75 1067030.94 
00:32:35.748 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:32:35.748 Job: Nvme2n1 ended in about 1.80 seconds with error 
00:32:35.748 Verification LBA range: start 0x0 length 0x400 
00:32:35.748 Nvme2n1 : 1.80 119.73 7.48 35.64 0.00 404947.55 12163.48 1060320.05 
00:32:35.748 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:32:35.748 Job: Nvme3n1 ended in about 1.80 seconds with error 
00:32:35.748 Verification LBA range: start 0x0 length 0x400 
00:32:35.748 Nvme3n1 : 1.80 124.68 7.79 35.62 0.00 388756.28 18035.51 1060320.05 
00:32:35.748 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:32:35.748 Job: Nvme4n1 ended in about 1.80 seconds with error 
00:32:35.748 Verification LBA range: start 0x0 length 0x400 
00:32:35.748 Nvme4n1 : 1.80 124.62 7.79 35.61 0.00 385216.97 5662.31 1060320.05 
00:32:35.748 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:32:35.748 Job: Nvme5n1 ended in about 1.80 seconds with error 
00:32:35.748 Verification LBA range: start 0x0 length 0x400 
00:32:35.748 Nvme5n1 : 1.80 115.66 7.23 35.59 0.00 403970.07 33135.00 1060320.05 
00:32:35.748 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:32:35.748 Job: Nvme6n1 ended in about 1.80 seconds with error 
00:32:35.748 Verification LBA range: start 0x0 length 0x400 
00:32:35.749 Nvme6n1 : 1.80 116.72 7.29 35.57 0.00 397203.65 37748.74 1053609.16 
00:32:35.749 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:32:35.749 Job: Nvme7n1 ended in about 1.80 seconds with error 
00:32:35.749 Verification LBA range: start 0x0 length 0x400 
00:32:35.749 Nvme7n1 : 1.80 124.44 7.78 35.55 0.00 374448.67 44669.34 1053609.16 
00:32:35.749 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:32:35.749 Job: Nvme8n1 ended in about 1.80 seconds with error 
00:32:35.749 Verification LBA range: start 0x0 length 0x400 
00:32:35.749 Nvme8n1 : 1.80 121.05 7.57 35.54 0.00 378640.79 53057.95 1053609.16 
00:32:35.749 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:32:35.749 Job: Nvme9n1 ended in about 1.75 seconds with error 
00:32:35.749 Verification LBA range: start 0x0 length 0x400 
00:32:35.749 Nvme9n1 : 1.75 109.59 6.85 36.53 0.00 402578.64 62495.13 1080452.71 
00:32:35.749 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:32:35.749 Job: Nvme10n1 ended in about 1.76 seconds with error 
00:32:35.749 Verification LBA range: start 0x0 length 0x400 
00:32:35.749 Nvme10n1 : 1.76 72.82 4.55 36.41 0.00 533061.09 63333.99 1073741.82 
00:32:35.749 =================================================================================================================== 
00:32:35.749 Total : 1149.66 71.85 357.72 0.00 403338.96 5662.31 1080452.71 
00:32:35.749 [2024-07-24 07:21:50.146481] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 
00:32:35.749 [2024-07-24 07:21:50.146546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 
00:32:35.749 [2024-07-24 07:21:50.146576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 
00:32:35.749 [2024-07-24 07:21:50.146600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 
00:32:35.749 [2024-07-24 07:21:50.154620] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 
00:32:35.749 [2024-07-24 07:21:50.154659] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 
00:32:35.749 [2024-07-24 07:21:50.154672] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000137ff800 
00:32:35.749 [2024-07-24 07:21:50.154839] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 
00:32:35.749 [2024-07-24 07:21:50.154855] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 
00:32:35.749 [2024-07-24 07:21:50.154866] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000b1ff240 
00:32:35.749 [2024-07-24 07:21:50.159353] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 
00:32:35.749 [2024-07-24 07:21:50.159378] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 
00:32:35.749 [2024-07-24 07:21:50.159390] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000199bb200 
00:32:35.749 [2024-07-24 07:21:50.159511] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 
00:32:35.749 [2024-07-24 07:21:50.159529] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 
00:32:35.749 [2024-07-24 07:21:50.159543] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000199d3dc0 
00:32:35.749 [2024-07-24 07:21:50.159642] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 
00:32:35.749 [2024-07-24 07:21:50.159660] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 
00:32:35.749 [2024-07-24 
07:21:50.159674] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000199be180 00:32:35.749 [2024-07-24 07:21:50.159768] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:35.749 [2024-07-24 07:21:50.159786] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:35.749 [2024-07-24 07:21:50.159799] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000199c7500 00:32:35.749 [2024-07-24 07:21:50.160728] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:35.749 [2024-07-24 07:21:50.160753] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:35.749 [2024-07-24 07:21:50.160768] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019983a80 00:32:35.749 [2024-07-24 07:21:50.160860] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:35.749 [2024-07-24 07:21:50.160879] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:35.749 [2024-07-24 07:21:50.160893] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000199a3bc0 00:32:35.749 [2024-07-24 07:21:50.160973] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:35.749 [2024-07-24 07:21:50.160991] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:35.749 [2024-07-24 07:21:50.161004] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001998e680 00:32:35.749 [2024-07-24 07:21:50.161078] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:35.749 [2024-07-24 07:21:50.161096] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:35.749 [2024-07-24 07:21:50.161109] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001998ee00 00:32:36.681 [2024-07-24 07:21:51.159270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:36.681 [2024-07-24 07:21:51.159320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:36.681 [2024-07-24 07:21:51.160970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:36.681 [2024-07-24 07:21:51.160990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
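Annotation: every reconnect attempt above follows the same three-line pattern, namely a CM event validation failure (RDMA_CM_EVENT_REJECTED where RDMA_CM_EVENT_ESTABLISHED was expected), an "RDMA connect error -74", and a "Failed to connect rqpair=..." for the queue pair in question. A small illustrative sketch, reusing the same hypothetical saved-console file as before, that condenses those triplets so the number of rejected reconnects is visible at a glance:

import re
from collections import Counter

# \s+ between tokens tolerates the wrapping in the captured console text.
REJECTED_RE = re.compile(
    r"Expected\s+(?P<expected>RDMA_CM_EVENT_\w+)\s+but\s+received\s+"
    r"(?P<received>RDMA_CM_EVENT_\w+)\s+\((?P<status>\d+)\)"
)
CONNECT_ERR_RE = re.compile(r"RDMA\s+connect\s+error\s+(?P<errno>-?\d+)")
RQPAIR_RE = re.compile(r"Failed\s+to\s+connect\s+rqpair=(?P<rqpair>0x[0-9a-f]+)")

def rdma_failures(path: str) -> None:
    text = open(path, encoding="utf-8", errors="replace").read()
    events = Counter(m.group("received") for m in REJECTED_RE.finditer(text))
    errnos = Counter(m.group("errno") for m in CONNECT_ERR_RE.finditer(text))
    rqpairs = [m.group("rqpair") for m in RQPAIR_RE.finditer(text)]
    print("CM events received instead of ESTABLISHED:", dict(events))
    print("connect error codes:", dict(errnos))
    print(f"{len(rqpairs)} rqpairs failed to connect:")
    for handle in rqpairs:
        print("  ", handle)

if __name__ == "__main__":
    rdma_failures("nvmf-phy-autotest-console.log")  # hypothetical file name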
00:32:36.681 [2024-07-24 07:21:51.161045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:36.681 [2024-07-24 07:21:51.161060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:36.681 [2024-07-24 07:21:51.161074] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:32:36.681 [2024-07-24 07:21:51.161099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:32:36.681 [2024-07-24 07:21:51.161110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:32:36.681 [2024-07-24 07:21:51.161122] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] already in failed state 00:32:36.681 [2024-07-24 07:21:51.161160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:36.682 [2024-07-24 07:21:51.161182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:36.682 [2024-07-24 07:21:51.163781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:36.682 [2024-07-24 07:21:51.163804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:32:36.682 [2024-07-24 07:21:51.165134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:36.682 [2024-07-24 07:21:51.165151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:32:36.682 [2024-07-24 07:21:51.166710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:36.682 [2024-07-24 07:21:51.166727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:32:36.682 [2024-07-24 07:21:51.167849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:36.682 [2024-07-24 07:21:51.167866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:32:36.682 [2024-07-24 07:21:51.169135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:36.682 [2024-07-24 07:21:51.169151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:32:36.682 [2024-07-24 07:21:51.170437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:36.682 [2024-07-24 07:21:51.170454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:32:36.682 [2024-07-24 07:21:51.171708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:36.682 [2024-07-24 07:21:51.171729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
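Annotation on the Latency(us) table printed earlier in this run: each row gives, per bdev, the runtime in seconds followed by IOPS, MiB/s, failed I/O per second, timeouts per second, and average/min/max latency in microseconds; the Total row sums the rate columns across the ten devices and takes the extremes of the min/max columns. A quick, purely illustrative cross-check against the quoted figures:

# Per-device rows from the job summary above: (IOPS, MiB/s, Fail/s)
rows = {
    "Nvme1n1":  (120.34, 7.52, 35.66),
    "Nvme2n1":  (119.73, 7.48, 35.64),
    "Nvme3n1":  (124.68, 7.79, 35.62),
    "Nvme4n1":  (124.62, 7.79, 35.61),
    "Nvme5n1":  (115.66, 7.23, 35.59),
    "Nvme6n1":  (116.72, 7.29, 35.57),
    "Nvme7n1":  (124.44, 7.78, 35.55),
    "Nvme8n1":  (121.05, 7.57, 35.54),
    "Nvme9n1":  (109.59, 6.85, 36.53),
    "Nvme10n1": (72.82,  4.55, 36.41),
}

iops = sum(r[0] for r in rows.values())
mibs = sum(r[1] for r in rows.values())
fails = sum(r[2] for r in rows.values())

# Reported totals: 1149.66 IOPS, 71.85 MiB/s, 357.72 Fail/s; per-row rounding
# explains any difference in the last digit.
print(f"sum IOPS   = {iops:8.2f}  (reported 1149.66)")
print(f"sum MiB/s  = {mibs:8.2f}  (reported 71.85)")
print(f"sum Fail/s = {fails:8.2f}  (reported 357.72)")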
00:32:36.682 [2024-07-24 07:21:51.172964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:36.682 [2024-07-24 07:21:51.172985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:32:36.682 [2024-07-24 07:21:51.173000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:32:36.682 [2024-07-24 07:21:51.173015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:32:36.682 [2024-07-24 07:21:51.173030] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] already in failed state 00:32:36.682 [2024-07-24 07:21:51.173053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:32:36.682 [2024-07-24 07:21:51.173069] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:32:36.682 [2024-07-24 07:21:51.173084] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] already in failed state 00:32:36.682 [2024-07-24 07:21:51.173106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:32:36.682 [2024-07-24 07:21:51.173121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:32:36.682 [2024-07-24 07:21:51.173136] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] already in failed state 00:32:36.682 [2024-07-24 07:21:51.173154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:32:36.682 [2024-07-24 07:21:51.173168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:32:36.682 [2024-07-24 07:21:51.173183] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] already in failed state 00:32:36.682 [2024-07-24 07:21:51.173380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:36.682 [2024-07-24 07:21:51.173403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:36.682 [2024-07-24 07:21:51.173421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:36.682 [2024-07-24 07:21:51.173438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:36.682 [2024-07-24 07:21:51.173456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:32:36.682 [2024-07-24 07:21:51.173472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:32:36.682 [2024-07-24 07:21:51.173487] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] already in failed state 00:32:36.682 [2024-07-24 07:21:51.173506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:32:36.682 [2024-07-24 07:21:51.173521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:32:36.682 [2024-07-24 07:21:51.173535] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] already in failed state 00:32:36.682 [2024-07-24 07:21:51.173553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:32:36.682 [2024-07-24 07:21:51.173568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:32:36.682 [2024-07-24 07:21:51.173582] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] already in failed state 00:32:36.682 [2024-07-24 07:21:51.173600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:32:36.682 [2024-07-24 07:21:51.173615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:32:36.682 [2024-07-24 07:21:51.173635] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] already in failed state 00:32:36.682 [2024-07-24 07:21:51.173719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:36.682 [2024-07-24 07:21:51.173739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:36.682 [2024-07-24 07:21:51.173756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:36.682 [2024-07-24 07:21:51.173773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
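The burst of REJECTED and CQ-transport errors above is the initiator side of nvmf_shutdown_tc3 trying to reconnect to a target that has already gone away (the later kill of nvmfpid 1806031 reports "No such process"), so every qpair for cnode1 through cnode10 fails and bdev_nvme abandons the controller resets. If one wanted to inspect that failed state by hand while the initiator app is still up, SPDK's rpc.py can dump its controller list; this is an illustrative spot-check rather than part of shutdown.sh, and the RPC socket path is an assumption.

  # Hypothetical spot-check, not in shutdown.sh: list the controllers the
  # initiator still tracks after the target was torn down.
  # bdev_nvme_get_controllers is a standard SPDK RPC; the -s socket path is assumed.
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers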
00:32:38.053 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:32:38.053 07:21:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:32:39.429 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1806031 00:32:39.429 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1806031) - No such process 00:32:39.429 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:32:39.429 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:32:39.429 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:32:39.429 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:32:39.429 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:39.429 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:32:39.429 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:39.429 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:32:39.429 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:32:39.429 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:32:39.429 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:32:39.429 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:39.429 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:32:39.429 rmmod nvme_rdma 00:32:39.429 rmmod nvme_fabrics 00:32:39.429 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:39.429 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:32:39.429 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:32:39.429 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:39.429 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:39.429 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:32:39.430 00:32:39.430 real 0m9.864s 00:32:39.430 user 0m35.415s 00:32:39.430 sys 0m1.902s 00:32:39.430 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:39.430 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:39.430 ************************************ 00:32:39.430 END TEST nvmf_shutdown_tc3 00:32:39.430 
************************************ 00:32:39.430 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:32:39.430 00:32:39.430 real 0m42.979s 00:32:39.430 user 2m13.765s 00:32:39.430 sys 0m11.679s 00:32:39.430 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:39.430 07:21:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:39.430 ************************************ 00:32:39.430 END TEST nvmf_shutdown 00:32:39.430 ************************************ 00:32:39.430 07:21:53 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:32:39.430 00:32:39.430 real 18m25.859s 00:32:39.430 user 53m16.111s 00:32:39.430 sys 3m30.682s 00:32:39.430 07:21:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:39.430 07:21:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:32:39.430 ************************************ 00:32:39.430 END TEST nvmf_target_extra 00:32:39.430 ************************************ 00:32:39.430 07:21:53 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:32:39.430 07:21:53 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:39.430 07:21:53 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:39.430 07:21:53 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:32:39.430 ************************************ 00:32:39.430 START TEST nvmf_host 00:32:39.430 ************************************ 00:32:39.430 07:21:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:32:39.430 * Looking for test storage... 
00:32:39.430 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:39.430 07:21:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.687 ************************************ 00:32:39.687 START TEST nvmf_multicontroller 00:32:39.687 ************************************ 00:32:39.687 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:32:39.687 * Looking for test storage... 
00:32:39.687 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:32:39.687 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:32:39.687 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:32:39.687 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:39.687 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:39.687 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.688 07:21:54 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller 
-- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:32:39.688 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:32:39.688 00:32:39.688 real 0m0.136s 00:32:39.688 user 0m0.061s 00:32:39.688 sys 0m0.082s 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:39.688 ************************************ 00:32:39.688 END TEST nvmf_multicontroller 00:32:39.688 ************************************ 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.688 ************************************ 00:32:39.688 START TEST nvmf_aer 00:32:39.688 ************************************ 00:32:39.688 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:32:39.946 * Looking for test storage... 
00:32:39.946 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:32:39.946 07:21:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:48.049 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:48.049 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:32:48.049 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:48.049 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:48.049 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:48.049 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:48.049 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:48.049 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:32:48.049 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:48.049 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:32:48.049 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:32:48.049 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:32:48.049 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:32:48.049 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:32:48.049 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:32:48.049 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:48.049 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:48.049 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:48.049 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:48.049 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:48.049 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:48.049 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:48.049 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:48.049 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:48.049 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:48.049 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
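The block above is nvmf/common.sh assembling its list of RDMA-capable NICs purely from PCI vendor/device IDs (Intel E810/X722 plus a set of Mellanox ConnectX parts) before deciding which driver family is in play. A rough, hand-run approximation of that scan, assuming a Linux host with lspci available, looks like this; it only mimics the vendor matching, not the per-device-ID bucketing the script does.

  # Sketch: list NICs from the two vendors the suite cares about
  # (0x8086 Intel, 0x15b3 Mellanox), matched on the vendor field of lspci -nn.
  lspci -Dnn | grep -Ei 'ethernet|infiniband' | grep -E '\[(8086|15b3):'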
00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:32:48.050 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:32:48.050 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:32:48.050 Found net devices under 0000:d9:00.0: mlx_0_0 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
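Here each matching PCI function (0000:d9:00.0 and 0000:d9:00.1) is resolved to its kernel net device by listing /sys/bus/pci/devices/<pci>/net/, which is where the "Found net devices under ..." lines come from. The same lookup can be reproduced by hand, purely as an illustration:

  # Map the two ConnectX PCI functions seen above to their netdev names
  # (mlx_0_0 / mlx_0_1 on this rig).
  for pci in 0000:d9:00.0 0000:d9:00.1; do
      echo -n "$pci: "; ls "/sys/bus/pci/devices/$pci/net/"
  done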
00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:32:48.050 Found net devices under 0000:d9:00.1: mlx_0_1 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # rdma_device_init 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # uname 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # modprobe ib_cm 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@63 -- # modprobe ib_core 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@64 -- # modprobe ib_umad 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe iw_cm 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # allocate_nic_ips 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@73 -- # get_rdma_if_list 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:32:48.050 07:22:01 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:32:48.050 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:48.050 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:32:48.050 altname enp217s0f0np0 00:32:48.050 altname ens818f0np0 00:32:48.050 inet 192.168.100.8/24 scope global mlx_0_0 00:32:48.050 valid_lft forever preferred_lft forever 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:32:48.050 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:48.050 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:32:48.050 altname enp217s0f1np1 00:32:48.050 altname ens818f1np1 00:32:48.050 inet 192.168.100.9/24 scope global mlx_0_1 00:32:48.050 valid_lft forever preferred_lft forever 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:32:48.050 07:22:01 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@86 -- # get_rdma_if_list 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:32:48.050 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:32:48.051 192.168.100.9' 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:32:48.051 192.168.100.9' 
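The get_ip_address helper traced above simply parses `ip -o -4 addr show <if>` to pull the first IPv4 address off each RDMA interface, which is how the suite arrives at 192.168.100.8 and 192.168.100.9 as the target addresses. As a standalone sketch of the same extraction:

  # Same parsing the helper performs, one interface per call.
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8
  ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.9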
00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@457 -- # head -n 1 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:32:48.051 192.168.100.9' 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@458 -- # tail -n +2 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@458 -- # head -n 1 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1811375 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1811375 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1811375 ']' 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:48.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:48.051 07:22:01 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:48.051 [2024-07-24 07:22:02.040149] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:32:48.051 [2024-07-24 07:22:02.040252] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:48.051 EAL: No free 2048 kB hugepages reported on node 1 00:32:48.051 [2024-07-24 07:22:02.184001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:48.051 [2024-07-24 07:22:02.381574] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:32:48.051 [2024-07-24 07:22:02.381628] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:48.051 [2024-07-24 07:22:02.381643] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:48.051 [2024-07-24 07:22:02.381654] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:48.051 [2024-07-24 07:22:02.381666] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:48.051 [2024-07-24 07:22:02.381786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:48.051 [2024-07-24 07:22:02.381860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:48.051 [2024-07-24 07:22:02.381916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:48.051 [2024-07-24 07:22:02.381928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:48.308 07:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:48.308 07:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:32:48.308 07:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:48.308 07:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:48.308 07:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:48.308 07:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:48.308 07:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:32:48.308 07:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.308 07:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:48.308 [2024-07-24 07:22:02.886221] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7fc378d48940) succeed. 00:32:48.308 [2024-07-24 07:22:02.895825] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7fc378d04940) succeed. 
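At this point aer.sh has started the target (nvmf_tgt -i 0 -e 0xFFFF -m 0xF, pid 1811375) and created the RDMA transport, which is what makes the two mlx5 devices show up as "Create IB device ... succeed". Stripped of the test harness, the bring-up is roughly the following; the flags are the ones traced above, while the rpc.py socket path and the readiness poll (a stand-in for waitforlisten) are assumptions.

  # Minimal sketch of the target bring-up performed by nvmfappstart + aer.sh,
  # run from the SPDK repository root.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # crude readiness wait: poll until the app answers RPCs on the default socket
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192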
00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:48.873 Malloc0 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:48.873 [2024-07-24 07:22:03.315411] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:48.873 [ 00:32:48.873 { 00:32:48.873 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:48.873 "subtype": "Discovery", 00:32:48.873 "listen_addresses": [], 00:32:48.873 "allow_any_host": true, 00:32:48.873 "hosts": [] 00:32:48.873 }, 00:32:48.873 { 00:32:48.873 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:48.873 "subtype": "NVMe", 00:32:48.873 "listen_addresses": [ 00:32:48.873 { 00:32:48.873 "trtype": "RDMA", 00:32:48.873 "adrfam": "IPv4", 00:32:48.873 "traddr": "192.168.100.8", 00:32:48.873 "trsvcid": "4420" 00:32:48.873 } 00:32:48.873 ], 00:32:48.873 "allow_any_host": true, 00:32:48.873 "hosts": [], 00:32:48.873 "serial_number": "SPDK00000000000001", 00:32:48.873 "model_number": "SPDK bdev Controller", 00:32:48.873 "max_namespaces": 2, 00:32:48.873 "min_cntlid": 1, 00:32:48.873 "max_cntlid": 65519, 00:32:48.873 "namespaces": [ 00:32:48.873 { 00:32:48.873 "nsid": 1, 00:32:48.873 "bdev_name": "Malloc0", 00:32:48.873 "name": "Malloc0", 00:32:48.873 "nguid": "73B6DB2230E64D5091CB5891565ED54A", 00:32:48.873 "uuid": "73b6db22-30e6-4d50-91cb-5891565ed54a" 00:32:48.873 } 00:32:48.873 ] 00:32:48.873 } 00:32:48.873 ] 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1811554 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1263 -- # local i=0 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' 0 -lt 200 ']' 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # i=1 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # sleep 0.1 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' 1 -lt 200 ']' 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # i=2 00:32:48.873 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # sleep 0.1 00:32:48.873 EAL: No free 2048 kB hugepages reported on node 1 00:32:49.130 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:32:49.130 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' 2 -lt 200 ']' 00:32:49.130 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # i=3 00:32:49.130 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # sleep 0.1 00:32:49.130 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:32:49.130 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:32:49.130 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # return 0 00:32:49.131 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:32:49.131 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.131 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:49.387 Malloc1 00:32:49.387 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.387 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:32:49.387 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.387 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:49.387 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.388 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:32:49.388 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.388 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:49.388 [ 00:32:49.388 { 00:32:49.388 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:49.388 "subtype": "Discovery", 00:32:49.388 "listen_addresses": [], 00:32:49.388 "allow_any_host": true, 00:32:49.388 "hosts": [] 00:32:49.388 }, 00:32:49.388 { 00:32:49.388 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:49.388 "subtype": "NVMe", 00:32:49.388 "listen_addresses": [ 00:32:49.388 { 00:32:49.388 "trtype": "RDMA", 00:32:49.388 "adrfam": "IPv4", 00:32:49.388 "traddr": "192.168.100.8", 00:32:49.388 "trsvcid": "4420" 00:32:49.388 } 00:32:49.388 ], 00:32:49.388 "allow_any_host": true, 00:32:49.388 "hosts": [], 00:32:49.388 "serial_number": "SPDK00000000000001", 00:32:49.388 "model_number": "SPDK bdev Controller", 00:32:49.388 "max_namespaces": 2, 00:32:49.388 "min_cntlid": 1, 00:32:49.388 "max_cntlid": 65519, 00:32:49.388 "namespaces": [ 00:32:49.388 { 00:32:49.388 "nsid": 1, 00:32:49.388 "bdev_name": "Malloc0", 00:32:49.388 "name": "Malloc0", 00:32:49.388 "nguid": "73B6DB2230E64D5091CB5891565ED54A", 00:32:49.388 "uuid": "73b6db22-30e6-4d50-91cb-5891565ed54a" 00:32:49.388 }, 00:32:49.388 { 00:32:49.388 "nsid": 2, 00:32:49.388 "bdev_name": "Malloc1", 00:32:49.388 "name": "Malloc1", 00:32:49.388 "nguid": "A9D68AD834294F8291FC826E9D6680D3", 00:32:49.388 "uuid": "a9d68ad8-3429-4f82-91fc-826e9d6680d3" 00:32:49.388 } 00:32:49.388 ] 00:32:49.388 } 00:32:49.388 ] 00:32:49.388 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.388 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1811554 00:32:49.388 Asynchronous Event Request test 00:32:49.388 Attaching to 192.168.100.8 00:32:49.388 Attached to 192.168.100.8 00:32:49.388 Registering asynchronous event callbacks... 00:32:49.388 Starting namespace attribute notice tests for all controllers... 00:32:49.388 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:32:49.388 aer_cb - Changed Namespace 00:32:49.388 Cleaning up... 
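The sequence above is the core of the AER check: a subsystem capped at two namespaces (-m 2) starts out with only Malloc0, the aer example app connects over RDMA and arms the Asynchronous Event Request, and hot-adding Malloc1 as nsid 2 through the RPC interface is what produces the "Changed Namespace" callback printed by the app. Condensed into a rough stand-alone sketch, with the values copied from this run and the commands assumed to run from the SPDK source root against the default RPC socket:

  ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # the aer example app (backgrounded) registers its AER callback and creates the touch file once armed
  ./test/nvme/aer/aer -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file &
  # hot-adding the second namespace is what triggers the Namespace Attribute Changed notice seen above
  ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2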
00:32:49.388 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:32:49.388 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.388 07:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:49.644 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.644 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:32:49.644 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.644 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:49.902 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.902 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:49.902 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.902 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:49.902 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.902 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:32:49.902 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:32:49.902 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:49.902 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:32:49.902 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:32:49.902 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:32:49.902 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:32:49.902 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:49.902 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:32:49.902 rmmod nvme_rdma 00:32:49.902 rmmod nvme_fabrics 00:32:49.902 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:49.902 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:32:49.902 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:32:49.902 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1811375 ']' 00:32:49.902 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1811375 00:32:49.902 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1811375 ']' 00:32:49.902 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1811375 00:32:49.902 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:32:49.902 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:49.902 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1811375 00:32:49.902 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:49.902 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:49.902 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1811375' 00:32:49.902 killing process 
with pid 1811375 00:32:49.902 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1811375 00:32:49.902 07:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1811375 00:32:51.798 07:22:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:51.798 07:22:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:32:51.798 00:32:51.798 real 0m12.003s 00:32:51.798 user 0m15.338s 00:32:51.798 sys 0m6.456s 00:32:51.798 07:22:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:51.798 07:22:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:51.798 ************************************ 00:32:51.798 END TEST nvmf_aer 00:32:51.798 ************************************ 00:32:51.798 07:22:06 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:32:51.798 07:22:06 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:51.798 07:22:06 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:51.798 07:22:06 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.798 ************************************ 00:32:51.798 START TEST nvmf_async_init 00:32:51.798 ************************************ 00:32:51.798 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:32:52.060 * Looking for test storage... 00:32:52.060 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:32:52.060 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:32:52.060 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:32:52.060 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:52.060 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:52.060 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:52.060 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:52.060 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:52.060 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:52.060 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:52.060 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:52.060 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:52.060 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:52.060 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:32:52.060 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:32:52.060 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
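The nvmf_aer teardown shown above follows the usual order: delete the backing malloc bdevs, delete the subsystem, then nvmftestfini syncs, unloads the host-side fabrics modules and kills the target process. A rough equivalent of that cleanup, reusing the names from this run:

  ./scripts/rpc.py bdev_malloc_delete Malloc0
  ./scripts/rpc.py bdev_malloc_delete Malloc1
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sync
  # nvmftestfini removes the host-side modules before stopping the target
  modprobe -v -r nvme-rdma
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"   # $nvmfpid is the nvmf_tgt PID recorded at start-up (1811375 in this run)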
00:32:52.060 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:52.060 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:52.060 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:52.060 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:32:52.060 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:52.060 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:52.060 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=ee0cdb1063b24f11a7021422234bd53c 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:52.061 07:22:06 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:32:52.061 07:22:06 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 
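The common.sh logic above only builds a whitelist of RDMA-capable PCI IDs (Intel E810/X722 plus a range of Mellanox ConnectX parts) and, because SPDK_TEST_NVMF_NICS=mlx5, narrows the candidate list to the Mellanox entries before scanning the bus. A rough stand-alone equivalent of that scan, assuming lspci is available and using the 0x15b3:0x1015 (ConnectX-4 Lx) ID this host reports:

  # list ConnectX-4 Lx functions (vendor 0x15b3, device 0x1015) and their netdev names
  for pci in $(lspci -Dn -d 15b3:1015 | awk '{print $1}'); do
    echo "Found $pci: $(ls /sys/bus/pci/devices/$pci/net/ 2>/dev/null)"
  done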
00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:33:00.191 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:33:00.191 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:33:00.191 Found net devices under 0000:d9:00.0: mlx_0_0 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:33:00.191 Found net devices under 0000:d9:00.1: mlx_0_1 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:00.191 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # rdma_device_init 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # uname 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # modprobe ib_cm 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@63 -- # modprobe ib_core 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@64 -- # modprobe ib_umad 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe iw_cm 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # allocate_nic_ips 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@73 -- # get_rdma_if_list 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 
== \m\l\x\_\0\_\0 ]] 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:33:00.192 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:00.192 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:33:00.192 altname enp217s0f0np0 00:33:00.192 altname ens818f0np0 00:33:00.192 inet 192.168.100.8/24 scope global mlx_0_0 00:33:00.192 valid_lft forever preferred_lft forever 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:33:00.192 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:00.192 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:33:00.192 altname enp217s0f1np1 00:33:00.192 altname ens818f1np1 00:33:00.192 inet 192.168.100.9/24 scope global mlx_0_1 00:33:00.192 valid_lft forever preferred_lft 
forever 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@86 -- # get_rdma_if_list 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 
00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:33:00.192 192.168.100.9' 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:33:00.192 192.168.100.9' 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@457 -- # head -n 1 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@458 -- # tail -n +2 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:33:00.192 192.168.100.9' 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@458 -- # head -n 1 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1815970 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1815970 00:33:00.192 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1815970 ']' 00:33:00.193 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:00.193 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:00.193 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:00.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
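Before nvmf_async_init starts its own target, the harness loads the RDMA/IB stack, confirms that each mlx5 port already carries a 192.168.100.x test address, and then launches nvmf_tgt on a single core with all tracepoint groups enabled. A condensed sketch of that environment check, with module names, addresses and target options taken from the run above:

  # host-side RDMA stack, the same modules common.sh probes above
  modprobe -a ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm
  modprobe nvme-rdma
  # confirm the test addresses are present on the two ConnectX ports
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # expect 192.168.100.8
  ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # expect 192.168.100.9
  # start the target: instance id 0, all tracepoint groups, core 0 only
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &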
00:33:00.193 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:00.193 07:22:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:00.193 [2024-07-24 07:22:14.793502] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:33:00.193 [2024-07-24 07:22:14.793599] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:00.455 EAL: No free 2048 kB hugepages reported on node 1 00:33:00.455 [2024-07-24 07:22:14.942601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.714 [2024-07-24 07:22:15.139337] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:00.714 [2024-07-24 07:22:15.139382] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:00.714 [2024-07-24 07:22:15.139396] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:00.714 [2024-07-24 07:22:15.139409] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:00.714 [2024-07-24 07:22:15.139420] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:00.714 [2024-07-24 07:22:15.139455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:00.973 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:00.973 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:33:00.973 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:00.973 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:00.973 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:00.973 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:01.232 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:33:01.232 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.232 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:01.232 [2024-07-24 07:22:15.635134] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028840/0x7f528d0a4940) succeed. 00:33:01.232 [2024-07-24 07:22:15.644054] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000289c0/0x7f528d05d940) succeed. 
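Because the target was started with the 0xFFFF tracepoint mask, the notices above also describe how to inspect the run while it is live: take a snapshot with the quoted spdk_trace invocation, or keep the shared-memory trace file for offline decoding. Roughly:

  # live snapshot while the target is running, as suggested by the notice above
  spdk_trace -s nvmf -i 0
  # or preserve the raw shared-memory trace for later analysis
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0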
00:33:01.232 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.232 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:33:01.232 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.232 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:01.232 null0 00:33:01.232 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.232 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:33:01.232 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.232 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:01.232 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.232 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:33:01.232 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.232 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:01.232 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.233 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g ee0cdb1063b24f11a7021422234bd53c 00:33:01.233 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.233 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:01.233 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.233 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:33:01.233 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.233 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:01.233 [2024-07-24 07:22:15.767322] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:33:01.233 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.233 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:33:01.233 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.233 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:01.492 nvme0n1 00:33:01.492 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.492 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:33:01.492 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.492 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:01.492 [ 
00:33:01.492 { 00:33:01.492 "name": "nvme0n1", 00:33:01.492 "aliases": [ 00:33:01.492 "ee0cdb10-63b2-4f11-a702-1422234bd53c" 00:33:01.492 ], 00:33:01.492 "product_name": "NVMe disk", 00:33:01.492 "block_size": 512, 00:33:01.492 "num_blocks": 2097152, 00:33:01.492 "uuid": "ee0cdb10-63b2-4f11-a702-1422234bd53c", 00:33:01.492 "assigned_rate_limits": { 00:33:01.492 "rw_ios_per_sec": 0, 00:33:01.492 "rw_mbytes_per_sec": 0, 00:33:01.492 "r_mbytes_per_sec": 0, 00:33:01.492 "w_mbytes_per_sec": 0 00:33:01.492 }, 00:33:01.492 "claimed": false, 00:33:01.492 "zoned": false, 00:33:01.492 "supported_io_types": { 00:33:01.492 "read": true, 00:33:01.492 "write": true, 00:33:01.492 "unmap": false, 00:33:01.492 "flush": true, 00:33:01.492 "reset": true, 00:33:01.492 "nvme_admin": true, 00:33:01.492 "nvme_io": true, 00:33:01.492 "nvme_io_md": false, 00:33:01.492 "write_zeroes": true, 00:33:01.492 "zcopy": false, 00:33:01.493 "get_zone_info": false, 00:33:01.493 "zone_management": false, 00:33:01.493 "zone_append": false, 00:33:01.493 "compare": true, 00:33:01.493 "compare_and_write": true, 00:33:01.493 "abort": true, 00:33:01.493 "seek_hole": false, 00:33:01.493 "seek_data": false, 00:33:01.493 "copy": true, 00:33:01.493 "nvme_iov_md": false 00:33:01.493 }, 00:33:01.493 "memory_domains": [ 00:33:01.493 { 00:33:01.493 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:33:01.493 "dma_device_type": 0 00:33:01.493 } 00:33:01.493 ], 00:33:01.493 "driver_specific": { 00:33:01.493 "nvme": [ 00:33:01.493 { 00:33:01.493 "trid": { 00:33:01.493 "trtype": "RDMA", 00:33:01.493 "adrfam": "IPv4", 00:33:01.493 "traddr": "192.168.100.8", 00:33:01.493 "trsvcid": "4420", 00:33:01.493 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:01.493 }, 00:33:01.493 "ctrlr_data": { 00:33:01.493 "cntlid": 1, 00:33:01.493 "vendor_id": "0x8086", 00:33:01.493 "model_number": "SPDK bdev Controller", 00:33:01.493 "serial_number": "00000000000000000000", 00:33:01.493 "firmware_revision": "24.09", 00:33:01.493 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:01.493 "oacs": { 00:33:01.493 "security": 0, 00:33:01.493 "format": 0, 00:33:01.493 "firmware": 0, 00:33:01.493 "ns_manage": 0 00:33:01.493 }, 00:33:01.493 "multi_ctrlr": true, 00:33:01.493 "ana_reporting": false 00:33:01.493 }, 00:33:01.493 "vs": { 00:33:01.493 "nvme_version": "1.3" 00:33:01.493 }, 00:33:01.493 "ns_data": { 00:33:01.493 "id": 1, 00:33:01.493 "can_share": true 00:33:01.493 } 00:33:01.493 } 00:33:01.493 ], 00:33:01.493 "mp_policy": "active_passive" 00:33:01.493 } 00:33:01.493 } 00:33:01.493 ] 00:33:01.493 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.493 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:33:01.493 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.493 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:01.493 [2024-07-24 07:22:15.895646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:01.493 [2024-07-24 07:22:15.924840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:33:01.493 [2024-07-24 07:22:15.948273] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
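This is the heart of the async_init check: the NGUID handed to nvmf_subsystem_add_ns (ee0cdb10... with the dashes stripped) must come back as the uuid and alias of the attached nvme0n1 bdev, and a bdev_nvme_reset_controller must reconnect cleanly, which is why the ctrlr_data that follows reports cntlid 2 instead of 1. Reduced to the bare RPC sequence, with every identifier copied from this run:

  nguid=ee0cdb1063b24f11a7021422234bd53c   # uuidgen output with the dashes removed
  ./scripts/rpc.py bdev_null_create null0 1024 512
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0
  # the reported uuid should be the nguid with its dashes reinserted
  ./scripts/rpc.py bdev_get_bdevs -b nvme0n1
  # a reset drops and re-establishes the connection, bumping the controller ID
  ./scripts/rpc.py bdev_nvme_reset_controller nvme0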
00:33:01.493 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.493 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:33:01.493 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.493 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:01.493 [ 00:33:01.493 { 00:33:01.493 "name": "nvme0n1", 00:33:01.493 "aliases": [ 00:33:01.493 "ee0cdb10-63b2-4f11-a702-1422234bd53c" 00:33:01.493 ], 00:33:01.493 "product_name": "NVMe disk", 00:33:01.493 "block_size": 512, 00:33:01.493 "num_blocks": 2097152, 00:33:01.493 "uuid": "ee0cdb10-63b2-4f11-a702-1422234bd53c", 00:33:01.493 "assigned_rate_limits": { 00:33:01.493 "rw_ios_per_sec": 0, 00:33:01.493 "rw_mbytes_per_sec": 0, 00:33:01.493 "r_mbytes_per_sec": 0, 00:33:01.493 "w_mbytes_per_sec": 0 00:33:01.493 }, 00:33:01.493 "claimed": false, 00:33:01.493 "zoned": false, 00:33:01.493 "supported_io_types": { 00:33:01.493 "read": true, 00:33:01.493 "write": true, 00:33:01.493 "unmap": false, 00:33:01.493 "flush": true, 00:33:01.493 "reset": true, 00:33:01.493 "nvme_admin": true, 00:33:01.493 "nvme_io": true, 00:33:01.493 "nvme_io_md": false, 00:33:01.493 "write_zeroes": true, 00:33:01.493 "zcopy": false, 00:33:01.493 "get_zone_info": false, 00:33:01.493 "zone_management": false, 00:33:01.493 "zone_append": false, 00:33:01.493 "compare": true, 00:33:01.493 "compare_and_write": true, 00:33:01.493 "abort": true, 00:33:01.493 "seek_hole": false, 00:33:01.493 "seek_data": false, 00:33:01.493 "copy": true, 00:33:01.493 "nvme_iov_md": false 00:33:01.493 }, 00:33:01.493 "memory_domains": [ 00:33:01.493 { 00:33:01.493 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:33:01.493 "dma_device_type": 0 00:33:01.493 } 00:33:01.493 ], 00:33:01.493 "driver_specific": { 00:33:01.493 "nvme": [ 00:33:01.493 { 00:33:01.493 "trid": { 00:33:01.493 "trtype": "RDMA", 00:33:01.493 "adrfam": "IPv4", 00:33:01.493 "traddr": "192.168.100.8", 00:33:01.493 "trsvcid": "4420", 00:33:01.493 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:01.493 }, 00:33:01.493 "ctrlr_data": { 00:33:01.493 "cntlid": 2, 00:33:01.493 "vendor_id": "0x8086", 00:33:01.493 "model_number": "SPDK bdev Controller", 00:33:01.493 "serial_number": "00000000000000000000", 00:33:01.493 "firmware_revision": "24.09", 00:33:01.493 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:01.493 "oacs": { 00:33:01.493 "security": 0, 00:33:01.493 "format": 0, 00:33:01.493 "firmware": 0, 00:33:01.493 "ns_manage": 0 00:33:01.493 }, 00:33:01.493 "multi_ctrlr": true, 00:33:01.493 "ana_reporting": false 00:33:01.493 }, 00:33:01.493 "vs": { 00:33:01.493 "nvme_version": "1.3" 00:33:01.493 }, 00:33:01.493 "ns_data": { 00:33:01.493 "id": 1, 00:33:01.493 "can_share": true 00:33:01.493 } 00:33:01.493 } 00:33:01.493 ], 00:33:01.493 "mp_policy": "active_passive" 00:33:01.493 } 00:33:01.493 } 00:33:01.493 ] 00:33:01.493 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.493 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.493 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.493 07:22:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:01.493 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:33:01.493 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:33:01.493 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.YvYqkbDNgW 00:33:01.493 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:33:01.493 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.YvYqkbDNgW 00:33:01.493 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:33:01.493 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.493 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:01.493 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.493 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:33:01.493 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.493 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:01.493 [2024-07-24 07:22:16.044211] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:33:01.493 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.493 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YvYqkbDNgW 00:33:01.493 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.493 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:01.493 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.493 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YvYqkbDNgW 00:33:01.493 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.493 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:01.493 [2024-07-24 07:22:16.060243] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:01.753 nvme0n1 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:01.753 [ 00:33:01.753 { 00:33:01.753 "name": "nvme0n1", 00:33:01.753 "aliases": [ 00:33:01.753 "ee0cdb10-63b2-4f11-a702-1422234bd53c" 00:33:01.753 ], 00:33:01.753 "product_name": "NVMe disk", 00:33:01.753 "block_size": 512, 00:33:01.753 "num_blocks": 2097152, 00:33:01.753 "uuid": 
"ee0cdb10-63b2-4f11-a702-1422234bd53c", 00:33:01.753 "assigned_rate_limits": { 00:33:01.753 "rw_ios_per_sec": 0, 00:33:01.753 "rw_mbytes_per_sec": 0, 00:33:01.753 "r_mbytes_per_sec": 0, 00:33:01.753 "w_mbytes_per_sec": 0 00:33:01.753 }, 00:33:01.753 "claimed": false, 00:33:01.753 "zoned": false, 00:33:01.753 "supported_io_types": { 00:33:01.753 "read": true, 00:33:01.753 "write": true, 00:33:01.753 "unmap": false, 00:33:01.753 "flush": true, 00:33:01.753 "reset": true, 00:33:01.753 "nvme_admin": true, 00:33:01.753 "nvme_io": true, 00:33:01.753 "nvme_io_md": false, 00:33:01.753 "write_zeroes": true, 00:33:01.753 "zcopy": false, 00:33:01.753 "get_zone_info": false, 00:33:01.753 "zone_management": false, 00:33:01.753 "zone_append": false, 00:33:01.753 "compare": true, 00:33:01.753 "compare_and_write": true, 00:33:01.753 "abort": true, 00:33:01.753 "seek_hole": false, 00:33:01.753 "seek_data": false, 00:33:01.753 "copy": true, 00:33:01.753 "nvme_iov_md": false 00:33:01.753 }, 00:33:01.753 "memory_domains": [ 00:33:01.753 { 00:33:01.753 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:33:01.753 "dma_device_type": 0 00:33:01.753 } 00:33:01.753 ], 00:33:01.753 "driver_specific": { 00:33:01.753 "nvme": [ 00:33:01.753 { 00:33:01.753 "trid": { 00:33:01.753 "trtype": "RDMA", 00:33:01.753 "adrfam": "IPv4", 00:33:01.753 "traddr": "192.168.100.8", 00:33:01.753 "trsvcid": "4421", 00:33:01.753 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:01.753 }, 00:33:01.753 "ctrlr_data": { 00:33:01.753 "cntlid": 3, 00:33:01.753 "vendor_id": "0x8086", 00:33:01.753 "model_number": "SPDK bdev Controller", 00:33:01.753 "serial_number": "00000000000000000000", 00:33:01.753 "firmware_revision": "24.09", 00:33:01.753 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:01.753 "oacs": { 00:33:01.753 "security": 0, 00:33:01.753 "format": 0, 00:33:01.753 "firmware": 0, 00:33:01.753 "ns_manage": 0 00:33:01.753 }, 00:33:01.753 "multi_ctrlr": true, 00:33:01.753 "ana_reporting": false 00:33:01.753 }, 00:33:01.753 "vs": { 00:33:01.753 "nvme_version": "1.3" 00:33:01.753 }, 00:33:01.753 "ns_data": { 00:33:01.753 "id": 1, 00:33:01.753 "can_share": true 00:33:01.753 } 00:33:01.753 } 00:33:01.753 ], 00:33:01.753 "mp_policy": "active_passive" 00:33:01.753 } 00:33:01.753 } 00:33:01.753 ] 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.YvYqkbDNgW 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 
00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:33:01.753 rmmod nvme_rdma 00:33:01.753 rmmod nvme_fabrics 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1815970 ']' 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1815970 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1815970 ']' 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1815970 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1815970 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1815970' 00:33:01.753 killing process with pid 1815970 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1815970 00:33:01.753 07:22:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1815970 00:33:03.132 07:22:17 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:33:03.133 00:33:03.133 real 0m11.109s 00:33:03.133 user 0m5.173s 00:33:03.133 sys 0m6.719s 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:03.133 ************************************ 00:33:03.133 END TEST nvmf_async_init 00:33:03.133 ************************************ 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.133 ************************************ 00:33:03.133 START TEST dma 00:33:03.133 ************************************ 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:33:03.133 * Looking for test storage... 
00:33:03.133 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@285 -- # xtrace_disable 00:33:03.133 07:22:17 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@291 -- # pci_devs=() 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@295 -- # net_devs=() 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@296 -- # e810=() 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@296 -- # local -ga e810 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@297 -- # x722=() 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@297 -- # local -ga x722 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@298 -- # mlx=() 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@298 -- # local -ga mlx 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:33:11.259 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:33:11.259 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:33:11.259 Found net devices under 0000:d9:00.0: mlx_0_0 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@383 -- 
# pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:33:11.259 Found net devices under 0000:d9:00.1: mlx_0_1 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@414 -- # is_hw=yes 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@420 -- # rdma_device_init 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # uname 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@63 -- # modprobe ib_core 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # continue 2 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # continue 2 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:33:11.259 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:33:11.259 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:11.259 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:33:11.260 altname enp217s0f0np0 00:33:11.260 altname ens818f0np0 00:33:11.260 inet 192.168.100.8/24 scope global mlx_0_0 00:33:11.260 valid_lft forever preferred_lft forever 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:33:11.260 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:11.260 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:33:11.260 altname enp217s0f1np1 00:33:11.260 altname ens818f1np1 00:33:11.260 inet 192.168.100.9/24 scope global mlx_0_1 00:33:11.260 valid_lft forever preferred_lft forever 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # return 0 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:11.260 07:22:25 
nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # continue 2 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # continue 2 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:33:11.260 192.168.100.9' 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:33:11.260 192.168.100.9' 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@457 -- # head -n 1 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:33:11.260 192.168.100.9' 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@458 -- # head -n 1 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@458 -- # tail -n +2 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@458 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@481 -- # nvmfpid=1820288 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # waitforlisten 1820288 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@829 -- # '[' -z 1820288 ']' 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:11.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:11.260 07:22:25 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:33:11.519 [2024-07-24 07:22:25.926474] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:33:11.519 [2024-07-24 07:22:25.926574] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:11.519 EAL: No free 2048 kB hugepages reported on node 1 00:33:11.519 [2024-07-24 07:22:26.079438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:11.778 [2024-07-24 07:22:26.293785] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:11.778 [2024-07-24 07:22:26.293831] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:11.778 [2024-07-24 07:22:26.293849] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:11.778 [2024-07-24 07:22:26.293861] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:11.778 [2024-07-24 07:22:26.293873] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:11.778 [2024-07-24 07:22:26.294151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.778 [2024-07-24 07:22:26.294162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:12.345 07:22:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:12.345 07:22:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@862 -- # return 0 00:33:12.345 07:22:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:12.345 07:22:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:12.345 07:22:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:33:12.345 07:22:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:12.345 07:22:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:33:12.345 07:22:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.345 07:22:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:33:12.345 [2024-07-24 07:22:26.759002] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028b40/0x7f6a6ff73940) succeed. 00:33:12.345 [2024-07-24 07:22:26.768324] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028cc0/0x7f6a6ff2f940) succeed. 00:33:12.345 07:22:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.345 07:22:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:33:12.345 07:22:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.345 07:22:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:33:12.914 Malloc0 00:33:12.914 07:22:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.914 07:22:27 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:33:12.914 07:22:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.914 07:22:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:33:12.914 07:22:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.914 07:22:27 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:33:12.914 07:22:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.914 07:22:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:33:12.914 07:22:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.914 07:22:27 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:33:12.914 07:22:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.914 07:22:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:33:12.914 [2024-07-24 07:22:27.281729] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:33:12.914 07:22:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.914 07:22:27 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:33:12.914 07:22:27 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:33:12.914 07:22:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@532 -- # config=() 00:33:12.914 07:22:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@532 -- # local subsystem config 00:33:12.914 07:22:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:12.914 07:22:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:12.914 { 00:33:12.914 "params": { 00:33:12.914 "name": "Nvme$subsystem", 00:33:12.914 "trtype": "$TEST_TRANSPORT", 00:33:12.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:12.914 "adrfam": "ipv4", 00:33:12.914 "trsvcid": "$NVMF_PORT", 00:33:12.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:12.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:12.914 "hdgst": ${hdgst:-false}, 00:33:12.914 "ddgst": ${ddgst:-false} 00:33:12.914 }, 00:33:12.914 "method": "bdev_nvme_attach_controller" 00:33:12.914 } 00:33:12.914 EOF 00:33:12.914 )") 00:33:12.914 07:22:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@554 -- # cat 00:33:12.914 07:22:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@556 -- # jq . 00:33:12.914 07:22:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@557 -- # IFS=, 00:33:12.914 07:22:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:12.914 "params": { 00:33:12.914 "name": "Nvme0", 00:33:12.914 "trtype": "rdma", 00:33:12.914 "traddr": "192.168.100.8", 00:33:12.914 "adrfam": "ipv4", 00:33:12.914 "trsvcid": "4420", 00:33:12.914 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:12.914 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:12.914 "hdgst": false, 00:33:12.914 "ddgst": false 00:33:12.914 }, 00:33:12.914 "method": "bdev_nvme_attach_controller" 00:33:12.914 }' 00:33:12.914 [2024-07-24 07:22:27.357857] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:33:12.914 [2024-07-24 07:22:27.357946] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1820581 ] 00:33:12.914 EAL: No free 2048 kB hugepages reported on node 1 00:33:12.914 [2024-07-24 07:22:27.500144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:13.173 [2024-07-24 07:22:27.720859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:13.173 [2024-07-24 07:22:27.720871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:19.741 bdev Nvme0n1 reports 1 memory domains 00:33:19.741 bdev Nvme0n1 supports RDMA memory domain 00:33:19.741 Initialization complete, running randrw IO for 5 sec on 2 cores 00:33:19.741 ========================================================================== 00:33:19.741 Latency [us] 00:33:19.741 IOPS MiB/s Average min max 00:33:19.741 Core 2: 19855.31 77.56 805.01 273.44 12853.04 00:33:19.741 Core 3: 19702.94 76.96 811.30 303.69 12889.78 00:33:19.741 ========================================================================== 00:33:19.741 Total : 39558.25 154.52 808.14 273.44 12889.78 00:33:19.741 00:33:19.741 Total operations: 197830, translate 197830 pull_push 0 memzero 0 00:33:19.741 07:22:34 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:33:19.741 07:22:34 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json 00:33:19.741 07:22:34 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq . 00:33:19.999 [2024-07-24 07:22:34.394399] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:33:19.999 [2024-07-24 07:22:34.394493] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1821759 ] 00:33:19.999 EAL: No free 2048 kB hugepages reported on node 1 00:33:19.999 [2024-07-24 07:22:34.538375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:20.259 [2024-07-24 07:22:34.755091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:20.259 [2024-07-24 07:22:34.755100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:28.382 bdev Malloc0 reports 2 memory domains 00:33:28.382 bdev Malloc0 doesn't support RDMA memory domain 00:33:28.382 Initialization complete, running randrw IO for 5 sec on 2 cores 00:33:28.382 ========================================================================== 00:33:28.382 Latency [us] 00:33:28.382 IOPS MiB/s Average min max 00:33:28.382 Core 2: 12760.56 49.85 1252.93 430.06 2097.01 00:33:28.382 Core 3: 12922.10 50.48 1237.23 451.95 1554.31 00:33:28.382 ========================================================================== 00:33:28.382 Total : 25682.67 100.32 1245.03 430.06 2097.01 00:33:28.382 00:33:28.382 Total operations: 128464, translate 0 pull_push 513856 memzero 0 00:33:28.382 07:22:41 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:33:28.382 07:22:41 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:33:28.382 07:22:41 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:33:28.382 07:22:41 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:33:28.382 Ignoring -M option 00:33:28.382 [2024-07-24 07:22:41.825238] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:33:28.382 [2024-07-24 07:22:41.825329] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1822968 ] 00:33:28.382 EAL: No free 2048 kB hugepages reported on node 1 00:33:28.382 [2024-07-24 07:22:41.965736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:28.382 [2024-07-24 07:22:42.179588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:28.382 [2024-07-24 07:22:42.179597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:34.986 bdev 40b7a957-d8e9-4bc1-97e5-998906e64d70 reports 1 memory domains 00:33:34.986 bdev 40b7a957-d8e9-4bc1-97e5-998906e64d70 supports RDMA memory domain 00:33:34.986 Initialization complete, running randread IO for 5 sec on 2 cores 00:33:34.986 ========================================================================== 00:33:34.986 Latency [us] 00:33:34.986 IOPS MiB/s Average min max 00:33:34.986 Core 2: 64403.17 251.57 247.49 82.95 1967.09 00:33:34.986 Core 3: 65976.29 257.72 241.59 77.68 1842.10 00:33:34.986 ========================================================================== 00:33:34.986 Total : 130379.46 509.29 244.50 77.68 1967.09 00:33:34.986 00:33:34.986 Total operations: 652011, translate 0 pull_push 0 memzero 652011 00:33:34.986 07:22:48 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:33:34.986 EAL: No free 2048 kB hugepages reported on node 1 00:33:34.986 [2024-07-24 07:22:48.980477] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:33:36.892 Initializing NVMe Controllers 00:33:36.892 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:33:36.892 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:33:36.892 Initialization complete. Launching workers. 00:33:36.892 ======================================================== 00:33:36.892 Latency(us) 00:33:36.892 Device Information : IOPS MiB/s Average min max 00:33:36.892 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2008.91 7.85 7964.05 5985.75 8978.39 00:33:36.892 ======================================================== 00:33:36.892 Total : 2008.91 7.85 7964.05 5985.75 8978.39 00:33:36.892 00:33:36.892 07:22:51 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:33:36.892 07:22:51 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:33:36.892 07:22:51 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:33:36.892 07:22:51 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:33:36.892 [2024-07-24 07:22:51.445932] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:33:36.892 [2024-07-24 07:22:51.446020] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1824558 ] 00:33:37.151 EAL: No free 2048 kB hugepages reported on node 1 00:33:37.151 [2024-07-24 07:22:51.588013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:37.410 [2024-07-24 07:22:51.805077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:37.410 [2024-07-24 07:22:51.805087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:43.977 bdev 421daad8-1b12-418a-a59b-a2b8185f0e5d reports 1 memory domains 00:33:43.977 bdev 421daad8-1b12-418a-a59b-a2b8185f0e5d supports RDMA memory domain 00:33:43.977 Initialization complete, running randrw IO for 5 sec on 2 cores 00:33:43.977 ========================================================================== 00:33:43.977 Latency [us] 00:33:43.977 IOPS MiB/s Average min max 00:33:43.977 Core 2: 17216.31 67.25 928.41 12.63 6161.78 00:33:43.977 Core 3: 17578.97 68.67 909.27 11.96 6400.35 00:33:43.977 ========================================================================== 00:33:43.977 Total : 34795.28 135.92 918.74 11.96 6400.35 00:33:43.977 00:33:43.977 Total operations: 174046, translate 173905 pull_push 0 memzero 141 00:33:43.977 07:22:58 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:33:43.977 07:22:58 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini 00:33:43.977 07:22:58 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:43.977 07:22:58 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # sync 00:33:43.977 07:22:58 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:33:43.977 07:22:58 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:33:43.977 07:22:58 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@120 -- # set +e 00:33:43.977 07:22:58 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:43.977 07:22:58 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:33:43.977 rmmod nvme_rdma 00:33:43.977 rmmod nvme_fabrics 00:33:43.977 07:22:58 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:43.977 07:22:58 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set -e 00:33:43.977 07:22:58 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # return 0 00:33:43.977 07:22:58 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@489 -- # '[' -n 1820288 ']' 00:33:43.977 07:22:58 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@490 -- # killprocess 1820288 00:33:43.977 07:22:58 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@948 -- # '[' -z 1820288 ']' 00:33:43.977 07:22:58 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@952 -- # kill -0 1820288 00:33:43.977 07:22:58 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@953 -- # uname 00:33:43.977 07:22:58 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:43.977 07:22:58 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1820288 00:33:43.977 07:22:58 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:43.977 07:22:58 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:43.977 07:22:58 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@966 -- # echo 
'killing process with pid 1820288' 00:33:43.977 killing process with pid 1820288 00:33:43.977 07:22:58 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@967 -- # kill 1820288 00:33:43.977 07:22:58 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@972 -- # wait 1820288 00:33:46.513 07:23:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:46.513 07:23:00 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:33:46.513 00:33:46.513 real 0m43.263s 00:33:46.513 user 2m2.640s 00:33:46.513 sys 0m8.348s 00:33:46.513 07:23:00 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:46.513 07:23:00 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:33:46.513 ************************************ 00:33:46.513 END TEST dma 00:33:46.513 ************************************ 00:33:46.513 07:23:00 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:33:46.513 07:23:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:46.513 07:23:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:46.513 07:23:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.513 ************************************ 00:33:46.513 START TEST nvmf_identify 00:33:46.513 ************************************ 00:33:46.513 07:23:00 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:33:46.513 * Looking for test storage... 00:33:46.513 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:33:46.513 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:33:46.513 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:33:46.513 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:46.513 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:46.513 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:46.513 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:46.513 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:46.513 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:46.513 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:46.513 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:46.513 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:46.513 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:46.513 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:33:46.513 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:33:46.513 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:46.513 07:23:01 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:46.513 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:46.513 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:46.513 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:33:46.513 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:46.513 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:46.513 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:46.513 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.513 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.514 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.514 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:33:46.514 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.514 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:33:46.514 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:46.514 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:46.514 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:46.514 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:46.514 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:46.514 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:46.514 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:46.514 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:46.514 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:46.514 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:46.514 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:33:46.514 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:33:46.514 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:46.514 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:46.514 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:46.514 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:46.514 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:46.514 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:46.514 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:46.514 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:46.514 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:46.514 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:33:46.514 07:23:01 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 
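What follows is the harness scanning the PCI bus for NICs it can run NVMe-oF over, keyed on the Intel (0x8086) and Mellanox (0x15b3) vendor IDs set up just above. Outside the harness, the same device discovery can be approximated by hand with lspci (an illustrative sketch, assuming pciutils is installed; it is not part of the captured run):

    # List Mellanox devices by the 0x15b3 vendor ID the harness matches on;
    # the 0x1015 devices reported below are the ports this run ends up using.
    lspci -d 15b3: -nn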
00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:33:54.635 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@342 
-- # [[ mlx5_core == unknown ]] 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:33:54.635 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:33:54.635 Found net devices under 0000:d9:00.0: mlx_0_0 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:54.635 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:33:54.636 Found net devices under 0000:d9:00.1: mlx_0_1 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- 
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # rdma_device_init 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # uname 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # modprobe ib_cm 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@63 -- # modprobe ib_core 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@64 -- # modprobe ib_umad 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe iw_cm 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # allocate_nic_ips 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@73 -- # get_rdma_if_list 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
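The modprobe calls above pull in the full kernel RDMA stack before any interface IPs are assigned. A minimal sanity check for the same module set looks like this (a sketch for reference, not something the harness runs):

    # Verify the RDMA modules the test just loaded are actually resident.
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        lsmod | grep -qw "$m" || echo "missing: $m"
    done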
00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:33:54.636 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:54.636 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:33:54.636 altname enp217s0f0np0 00:33:54.636 altname ens818f0np0 00:33:54.636 inet 192.168.100.8/24 scope global mlx_0_0 00:33:54.636 valid_lft forever preferred_lft forever 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:33:54.636 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:54.636 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:33:54.636 altname enp217s0f1np1 00:33:54.636 altname ens818f1np1 00:33:54.636 inet 192.168.100.9/24 scope global mlx_0_1 00:33:54.636 valid_lft forever preferred_lft forever 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@86 -- # get_rdma_if_list 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 
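The interface dumps above show the two ports, mlx_0_0 and mlx_0_1, carrying 192.168.100.8 and 192.168.100.9. The get_ip_address helper the log keeps invoking reduces to the same pipeline traced at nvmf/common.sh@113:

    # Extract the IPv4 address of an RDMA interface, as the harness does.
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # 192.168.100.8
    ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # 192.168.100.9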
00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:54.636 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:33:54.896 192.168.100.9' 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:33:54.896 192.168.100.9' 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # head -n 1 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:33:54.896 192.168.100.9' 
00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # tail -n +2 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # head -n 1 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1830051 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1830051 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1830051 ']' 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:54.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:54.896 07:23:09 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:54.896 [2024-07-24 07:23:09.428962] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:33:54.896 [2024-07-24 07:23:09.429058] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:54.896 EAL: No free 2048 kB hugepages reported on node 1 00:33:55.156 [2024-07-24 07:23:09.576316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:55.415 [2024-07-24 07:23:09.786880] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:55.415 [2024-07-24 07:23:09.786919] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
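At this point the target application is up: nvmf_tgt was launched with '-i 0 -e 0xFFFF -m 0xF' (four reactor cores, full tracepoint mask) and the harness waited for its RPC socket. Stripped of the CI paths, that step is roughly the following (a sketch; the real test uses its waitforlisten helper rather than this simple poll):

    # Start the NVMe-oF target, then wait until the RPC server answers
    # on /var/tmp/spdk.sock.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done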
00:33:55.415 [2024-07-24 07:23:09.786934] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:55.415 [2024-07-24 07:23:09.786945] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:55.415 [2024-07-24 07:23:09.786957] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:55.415 [2024-07-24 07:23:09.787040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:55.415 [2024-07-24 07:23:09.787120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:55.415 [2024-07-24 07:23:09.787188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:55.415 [2024-07-24 07:23:09.787199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:55.674 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:55.674 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:33:55.674 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:33:55.674 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:55.674 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:55.674 [2024-07-24 07:23:10.245648] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f502eb26940) succeed. 00:33:55.674 [2024-07-24 07:23:10.255145] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f502eae2940) succeed. 00:33:55.934 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:55.934 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:33:55.934 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:55.934 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:56.193 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:56.193 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.193 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:56.193 Malloc0 00:33:56.193 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.193 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:56.193 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.193 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:56.193 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.193 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:33:56.193 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.193 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:56.193 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.193 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:33:56.193 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.193 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:56.193 [2024-07-24 07:23:10.716979] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:33:56.193 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.193 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:33:56.193 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.193 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:56.193 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.193 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:33:56.193 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.193 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:56.193 [ 00:33:56.193 { 00:33:56.193 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:56.193 "subtype": "Discovery", 00:33:56.193 "listen_addresses": [ 00:33:56.193 { 00:33:56.193 "trtype": "RDMA", 00:33:56.193 "adrfam": "IPv4", 00:33:56.193 "traddr": "192.168.100.8", 00:33:56.193 "trsvcid": "4420" 00:33:56.193 } 00:33:56.193 ], 00:33:56.193 "allow_any_host": true, 00:33:56.193 "hosts": [] 00:33:56.193 }, 00:33:56.193 { 00:33:56.193 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:56.193 "subtype": "NVMe", 00:33:56.193 "listen_addresses": [ 00:33:56.193 { 00:33:56.193 "trtype": "RDMA", 00:33:56.193 "adrfam": "IPv4", 00:33:56.193 "traddr": "192.168.100.8", 00:33:56.193 "trsvcid": "4420" 00:33:56.193 } 00:33:56.193 ], 00:33:56.194 "allow_any_host": true, 00:33:56.194 "hosts": [], 00:33:56.194 "serial_number": "SPDK00000000000001", 00:33:56.194 "model_number": "SPDK bdev Controller", 00:33:56.194 "max_namespaces": 32, 00:33:56.194 "min_cntlid": 1, 00:33:56.194 "max_cntlid": 65519, 00:33:56.194 "namespaces": [ 00:33:56.194 { 00:33:56.194 "nsid": 1, 00:33:56.194 "bdev_name": "Malloc0", 00:33:56.194 "name": "Malloc0", 00:33:56.194 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:33:56.194 "eui64": "ABCDEF0123456789", 00:33:56.194 "uuid": "f3084970-8dfd-44bb-a46d-8d7aad052935" 00:33:56.194 } 00:33:56.194 ] 00:33:56.194 } 00:33:56.194 ] 00:33:56.194 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.194 07:23:10 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:33:56.194 [2024-07-24 07:23:10.797302] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
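The JSON above confirms the discovery subsystem plus nqn.2016-06.io.spdk:cnode1 with the Malloc0 namespace, both listening on 192.168.100.8:4420. The provisioning sequence the log just traced, written out as plain rpc.py calls instead of the harness' rpc_cmd wrapper (paths shortened; flags copied from the trace above):

    # Reconstruct the target configuration captured above.
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    # ...and the identify pass that produces the controller dump further down:
    ./build/bin/spdk_nvme_identify -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all

Running identify against the discovery NQN is what yields the 'NVMe over Fabrics controller at 192.168.100.8:4420' report below, including the two discovery log entries.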
00:33:56.194 [2024-07-24 07:23:10.797372] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1830336 ] 00:33:56.456 EAL: No free 2048 kB hugepages reported on node 1 00:33:56.456 [2024-07-24 07:23:10.866843] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:33:56.456 [2024-07-24 07:23:10.866957] nvme_rdma.c:2192:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:33:56.456 [2024-07-24 07:23:10.866983] nvme_rdma.c:1211:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:33:56.456 [2024-07-24 07:23:10.866991] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:33:56.456 [2024-07-24 07:23:10.867035] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:33:56.456 [2024-07-24 07:23:10.878070] nvme_rdma.c: 430:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:33:56.456 [2024-07-24 07:23:10.892932] nvme_rdma.c:1100:nvme_rdma_connect_established: *DEBUG*: rc =0 00:33:56.456 [2024-07-24 07:23:10.892953] nvme_rdma.c:1105:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:33:56.456 [2024-07-24 07:23:10.892975] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.892986] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.892998] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.893006] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.893017] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.893025] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.893034] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.893043] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.893052] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.893060] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.893071] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf410 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.893079] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf438 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.893089] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf460 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.893097] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf488 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.893108] nvme_rdma.c: 
888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4b0 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.893116] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4d8 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.893129] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf500 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.893137] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf528 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.893149] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf550 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.893157] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf578 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.893166] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5a0 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.893174] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5c8 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.893184] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5f0 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.893192] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf618 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.893202] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.893210] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.893221] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.893229] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.893245] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.893253] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.893266] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.893273] nvme_rdma.c:1119:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:33:56.456 [2024-07-24 07:23:10.893284] nvme_rdma.c:1122:nvme_rdma_connect_established: *DEBUG*: rc =0 00:33:56.456 [2024-07-24 07:23:10.893294] nvme_rdma.c:1127:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:33:56.456 [2024-07-24 07:23:10.893322] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.893341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cedc0 len:0x400 key:0x180600 00:33:56.456 [2024-07-24 07:23:10.898641] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.456 [2024-07-24 07:23:10.898663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:33:56.456 [2024-07-24 07:23:10.898677] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.898697] 
nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:33:56.456 [2024-07-24 07:23:10.898713] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:33:56.456 [2024-07-24 07:23:10.898723] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:33:56.456 [2024-07-24 07:23:10.898748] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.898768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.456 [2024-07-24 07:23:10.898805] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.456 [2024-07-24 07:23:10.898815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:33:56.456 [2024-07-24 07:23:10.898831] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:33:56.456 [2024-07-24 07:23:10.898845] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.898858] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:33:56.456 [2024-07-24 07:23:10.898870] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.898886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.456 [2024-07-24 07:23:10.898901] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.456 [2024-07-24 07:23:10.898912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:33:56.456 [2024-07-24 07:23:10.898921] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:33:56.456 [2024-07-24 07:23:10.898932] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.898942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:33:56.456 [2024-07-24 07:23:10.898956] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.898970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.456 [2024-07-24 07:23:10.898997] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.456 [2024-07-24 07:23:10.899005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:56.456 [2024-07-24 07:23:10.899016] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:33:56.456 [2024-07-24 07:23:10.899028] 
nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.899043] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180600 00:33:56.456 [2024-07-24 07:23:10.899054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.456 [2024-07-24 07:23:10.899079] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.457 [2024-07-24 07:23:10.899087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:56.457 [2024-07-24 07:23:10.899100] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:33:56.457 [2024-07-24 07:23:10.899109] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:33:56.457 [2024-07-24 07:23:10.899120] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x180600 00:33:56.457 [2024-07-24 07:23:10.899130] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:33:56.457 [2024-07-24 07:23:10.899242] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:33:56.457 [2024-07-24 07:23:10.899250] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:33:56.457 [2024-07-24 07:23:10.899268] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180600 00:33:56.457 [2024-07-24 07:23:10.899279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.457 [2024-07-24 07:23:10.899306] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.457 [2024-07-24 07:23:10.899314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:56.457 [2024-07-24 07:23:10.899325] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:33:56.457 [2024-07-24 07:23:10.899336] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x180600 00:33:56.457 [2024-07-24 07:23:10.899350] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180600 00:33:56.457 [2024-07-24 07:23:10.899365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.457 [2024-07-24 07:23:10.899390] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.457 [2024-07-24 07:23:10.899398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:33:56.457 [2024-07-24 07:23:10.899411] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller 
is ready 00:33:56.457 [2024-07-24 07:23:10.899420] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:33:56.457 [2024-07-24 07:23:10.899431] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x180600 00:33:56.457 [2024-07-24 07:23:10.899440] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:33:56.457 [2024-07-24 07:23:10.899460] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:33:56.457 [2024-07-24 07:23:10.899479] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180600 00:33:56.457 [2024-07-24 07:23:10.899493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180600 00:33:56.457 [2024-07-24 07:23:10.899538] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.457 [2024-07-24 07:23:10.899549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:56.457 [2024-07-24 07:23:10.899565] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:33:56.457 [2024-07-24 07:23:10.899577] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:33:56.457 [2024-07-24 07:23:10.899586] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:33:56.457 [2024-07-24 07:23:10.899597] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:33:56.457 [2024-07-24 07:23:10.899606] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:33:56.457 [2024-07-24 07:23:10.899617] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:33:56.457 [2024-07-24 07:23:10.899631] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x180600 00:33:56.457 [2024-07-24 07:23:10.899646] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:33:56.457 [2024-07-24 07:23:10.899662] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180600 00:33:56.457 [2024-07-24 07:23:10.899676] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.457 [2024-07-24 07:23:10.899705] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.457 [2024-07-24 07:23:10.899716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:56.457 [2024-07-24 07:23:10.899727] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0200 length 0x40 lkey 0x180600 00:33:56.457 [2024-07-24 07:23:10.899743] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.457 [2024-07-24 07:23:10.899753] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x180600 00:33:56.457 [2024-07-24 07:23:10.899765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.457 [2024-07-24 07:23:10.899775] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.457 [2024-07-24 07:23:10.899786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.457 [2024-07-24 07:23:10.899796] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x180600 00:33:56.457 [2024-07-24 07:23:10.899809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.457 [2024-07-24 07:23:10.899817] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:33:56.457 [2024-07-24 07:23:10.899828] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0x180600 00:33:56.457 [2024-07-24 07:23:10.899847] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:33:56.457 [2024-07-24 07:23:10.899862] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180600 00:33:56.457 [2024-07-24 07:23:10.899873] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.457 [2024-07-24 07:23:10.899897] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.457 [2024-07-24 07:23:10.899906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:33:56.457 [2024-07-24 07:23:10.899917] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:33:56.457 [2024-07-24 07:23:10.899926] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:33:56.457 [2024-07-24 07:23:10.899936] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0x180600 00:33:56.457 [2024-07-24 07:23:10.899953] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180600 00:33:56.457 [2024-07-24 07:23:10.899969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180600 00:33:56.457 [2024-07-24 07:23:10.900006] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.457 [2024-07-24 07:23:10.900017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:56.457 [2024-07-24 07:23:10.900031] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf410 
length 0x10 lkey 0x180600 00:33:56.457 [2024-07-24 07:23:10.900047] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:33:56.457 [2024-07-24 07:23:10.900092] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180600 00:33:56.457 [2024-07-24 07:23:10.900107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x400 key:0x180600 00:33:56.457 [2024-07-24 07:23:10.900120] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180600 00:33:56.457 [2024-07-24 07:23:10.900135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.457 [2024-07-24 07:23:10.900179] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.457 [2024-07-24 07:23:10.900193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:56.457 [2024-07-24 07:23:10.900215] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x180600 00:33:56.457 [2024-07-24 07:23:10.900233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x180600 00:33:56.457 [2024-07-24 07:23:10.900242] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf438 length 0x10 lkey 0x180600 00:33:56.457 [2024-07-24 07:23:10.900256] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.457 [2024-07-24 07:23:10.900264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:56.457 [2024-07-24 07:23:10.900274] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf460 length 0x10 lkey 0x180600 00:33:56.457 [2024-07-24 07:23:10.900282] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.457 [2024-07-24 07:23:10.900292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:56.457 [2024-07-24 07:23:10.900308] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180600 00:33:56.457 [2024-07-24 07:23:10.900322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x180600 00:33:56.457 [2024-07-24 07:23:10.900331] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf488 length 0x10 lkey 0x180600 00:33:56.457 [2024-07-24 07:23:10.900358] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.457 [2024-07-24 07:23:10.900366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:56.457 [2024-07-24 07:23:10.900387] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4b0 length 0x10 lkey 0x180600 00:33:56.457 ===================================================== 00:33:56.457 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:56.458 
===================================================== 00:33:56.458 Controller Capabilities/Features 00:33:56.458 ================================ 00:33:56.458 Vendor ID: 0000 00:33:56.458 Subsystem Vendor ID: 0000 00:33:56.458 Serial Number: .................... 00:33:56.458 Model Number: ........................................ 00:33:56.458 Firmware Version: 24.09 00:33:56.458 Recommended Arb Burst: 0 00:33:56.458 IEEE OUI Identifier: 00 00 00 00:33:56.458 Multi-path I/O 00:33:56.458 May have multiple subsystem ports: No 00:33:56.458 May have multiple controllers: No 00:33:56.458 Associated with SR-IOV VF: No 00:33:56.458 Max Data Transfer Size: 131072 00:33:56.458 Max Number of Namespaces: 0 00:33:56.458 Max Number of I/O Queues: 1024 00:33:56.458 NVMe Specification Version (VS): 1.3 00:33:56.458 NVMe Specification Version (Identify): 1.3 00:33:56.458 Maximum Queue Entries: 128 00:33:56.458 Contiguous Queues Required: Yes 00:33:56.458 Arbitration Mechanisms Supported 00:33:56.458 Weighted Round Robin: Not Supported 00:33:56.458 Vendor Specific: Not Supported 00:33:56.458 Reset Timeout: 15000 ms 00:33:56.458 Doorbell Stride: 4 bytes 00:33:56.458 NVM Subsystem Reset: Not Supported 00:33:56.458 Command Sets Supported 00:33:56.458 NVM Command Set: Supported 00:33:56.458 Boot Partition: Not Supported 00:33:56.458 Memory Page Size Minimum: 4096 bytes 00:33:56.458 Memory Page Size Maximum: 4096 bytes 00:33:56.458 Persistent Memory Region: Not Supported 00:33:56.458 Optional Asynchronous Events Supported 00:33:56.458 Namespace Attribute Notices: Not Supported 00:33:56.458 Firmware Activation Notices: Not Supported 00:33:56.458 ANA Change Notices: Not Supported 00:33:56.458 PLE Aggregate Log Change Notices: Not Supported 00:33:56.458 LBA Status Info Alert Notices: Not Supported 00:33:56.458 EGE Aggregate Log Change Notices: Not Supported 00:33:56.458 Normal NVM Subsystem Shutdown event: Not Supported 00:33:56.458 Zone Descriptor Change Notices: Not Supported 00:33:56.458 Discovery Log Change Notices: Supported 00:33:56.458 Controller Attributes 00:33:56.458 128-bit Host Identifier: Not Supported 00:33:56.458 Non-Operational Permissive Mode: Not Supported 00:33:56.458 NVM Sets: Not Supported 00:33:56.458 Read Recovery Levels: Not Supported 00:33:56.458 Endurance Groups: Not Supported 00:33:56.458 Predictable Latency Mode: Not Supported 00:33:56.458 Traffic Based Keep ALive: Not Supported 00:33:56.458 Namespace Granularity: Not Supported 00:33:56.458 SQ Associations: Not Supported 00:33:56.458 UUID List: Not Supported 00:33:56.458 Multi-Domain Subsystem: Not Supported 00:33:56.458 Fixed Capacity Management: Not Supported 00:33:56.458 Variable Capacity Management: Not Supported 00:33:56.458 Delete Endurance Group: Not Supported 00:33:56.458 Delete NVM Set: Not Supported 00:33:56.458 Extended LBA Formats Supported: Not Supported 00:33:56.458 Flexible Data Placement Supported: Not Supported 00:33:56.458 00:33:56.458 Controller Memory Buffer Support 00:33:56.458 ================================ 00:33:56.458 Supported: No 00:33:56.458 00:33:56.458 Persistent Memory Region Support 00:33:56.458 ================================ 00:33:56.458 Supported: No 00:33:56.458 00:33:56.458 Admin Command Set Attributes 00:33:56.458 ============================ 00:33:56.458 Security Send/Receive: Not Supported 00:33:56.458 Format NVM: Not Supported 00:33:56.458 Firmware Activate/Download: Not Supported 00:33:56.458 Namespace Management: Not Supported 00:33:56.458 Device Self-Test: Not Supported 00:33:56.458 
Directives: Not Supported 00:33:56.458 NVMe-MI: Not Supported 00:33:56.458 Virtualization Management: Not Supported 00:33:56.458 Doorbell Buffer Config: Not Supported 00:33:56.458 Get LBA Status Capability: Not Supported 00:33:56.458 Command & Feature Lockdown Capability: Not Supported 00:33:56.458 Abort Command Limit: 1 00:33:56.458 Async Event Request Limit: 4 00:33:56.458 Number of Firmware Slots: N/A 00:33:56.458 Firmware Slot 1 Read-Only: N/A 00:33:56.458 Firmware Activation Without Reset: N/A 00:33:56.458 Multiple Update Detection Support: N/A 00:33:56.458 Firmware Update Granularity: No Information Provided 00:33:56.458 Per-Namespace SMART Log: No 00:33:56.458 Asymmetric Namespace Access Log Page: Not Supported 00:33:56.458 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:56.458 Command Effects Log Page: Not Supported 00:33:56.458 Get Log Page Extended Data: Supported 00:33:56.458 Telemetry Log Pages: Not Supported 00:33:56.458 Persistent Event Log Pages: Not Supported 00:33:56.458 Supported Log Pages Log Page: May Support 00:33:56.458 Commands Supported & Effects Log Page: Not Supported 00:33:56.458 Feature Identifiers & Effects Log Page:May Support 00:33:56.458 NVMe-MI Commands & Effects Log Page: May Support 00:33:56.458 Data Area 4 for Telemetry Log: Not Supported 00:33:56.458 Error Log Page Entries Supported: 128 00:33:56.458 Keep Alive: Not Supported 00:33:56.458 00:33:56.458 NVM Command Set Attributes 00:33:56.458 ========================== 00:33:56.458 Submission Queue Entry Size 00:33:56.458 Max: 1 00:33:56.458 Min: 1 00:33:56.458 Completion Queue Entry Size 00:33:56.458 Max: 1 00:33:56.458 Min: 1 00:33:56.458 Number of Namespaces: 0 00:33:56.458 Compare Command: Not Supported 00:33:56.458 Write Uncorrectable Command: Not Supported 00:33:56.458 Dataset Management Command: Not Supported 00:33:56.458 Write Zeroes Command: Not Supported 00:33:56.458 Set Features Save Field: Not Supported 00:33:56.458 Reservations: Not Supported 00:33:56.458 Timestamp: Not Supported 00:33:56.458 Copy: Not Supported 00:33:56.458 Volatile Write Cache: Not Present 00:33:56.458 Atomic Write Unit (Normal): 1 00:33:56.458 Atomic Write Unit (PFail): 1 00:33:56.458 Atomic Compare & Write Unit: 1 00:33:56.458 Fused Compare & Write: Supported 00:33:56.458 Scatter-Gather List 00:33:56.458 SGL Command Set: Supported 00:33:56.458 SGL Keyed: Supported 00:33:56.458 SGL Bit Bucket Descriptor: Not Supported 00:33:56.458 SGL Metadata Pointer: Not Supported 00:33:56.458 Oversized SGL: Not Supported 00:33:56.458 SGL Metadata Address: Not Supported 00:33:56.458 SGL Offset: Supported 00:33:56.458 Transport SGL Data Block: Not Supported 00:33:56.458 Replay Protected Memory Block: Not Supported 00:33:56.458 00:33:56.458 Firmware Slot Information 00:33:56.458 ========================= 00:33:56.458 Active slot: 0 00:33:56.458 00:33:56.458 00:33:56.458 Error Log 00:33:56.458 ========= 00:33:56.458 00:33:56.458 Active Namespaces 00:33:56.458 ================= 00:33:56.458 Discovery Log Page 00:33:56.458 ================== 00:33:56.458 Generation Counter: 2 00:33:56.458 Number of Records: 2 00:33:56.458 Record Format: 0 00:33:56.458 00:33:56.458 Discovery Log Entry 0 00:33:56.458 ---------------------- 00:33:56.458 Transport Type: 1 (RDMA) 00:33:56.458 Address Family: 1 (IPv4) 00:33:56.458 Subsystem Type: 3 (Current Discovery Subsystem) 00:33:56.458 Entry Flags: 00:33:56.458 Duplicate Returned Information: 1 00:33:56.458 Explicit Persistent Connection Support for Discovery: 1 00:33:56.458 Transport Requirements: 
00:33:56.458 Secure Channel: Not Required 00:33:56.458 Port ID: 0 (0x0000) 00:33:56.458 Controller ID: 65535 (0xffff) 00:33:56.458 Admin Max SQ Size: 128 00:33:56.458 Transport Service Identifier: 4420 00:33:56.458 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:56.458 Transport Address: 192.168.100.8 00:33:56.458 Transport Specific Address Subtype - RDMA 00:33:56.458 RDMA QP Service Type: 1 (Reliable Connected) 00:33:56.458 RDMA Provider Type: 1 (No provider specified) 00:33:56.458 RDMA CM Service: 1 (RDMA_CM) 00:33:56.458 Discovery Log Entry 1 00:33:56.458 ---------------------- 00:33:56.458 Transport Type: 1 (RDMA) 00:33:56.458 Address Family: 1 (IPv4) 00:33:56.458 Subsystem Type: 2 (NVM Subsystem) 00:33:56.458 Entry Flags: 00:33:56.458 Duplicate Returned Information: 0 00:33:56.458 Explicit Persistent Connection Support for Discovery: 0 00:33:56.458 Transport Requirements: 00:33:56.458 Secure Channel: Not Required 00:33:56.458 Port ID: 0 (0x0000) 00:33:56.458 Controller ID: 65535 (0xffff) 00:33:56.458 Admin Max SQ Size: [2024-07-24 07:23:10.900504] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:33:56.458 [2024-07-24 07:23:10.900523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.459 [2024-07-24 07:23:10.900536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.459 [2024-07-24 07:23:10.900548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.459 [2024-07-24 07:23:10.900558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.459 [2024-07-24 07:23:10.900572] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.900584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.459 [2024-07-24 07:23:10.900611] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.459 [2024-07-24 07:23:10.900632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:33:56.459 [2024-07-24 07:23:10.900652] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.900664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.459 [2024-07-24 07:23:10.900675] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4d8 length 0x10 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.900696] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.459 [2024-07-24 07:23:10.900709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:56.459 [2024-07-24 07:23:10.900718] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:33:56.459 [2024-07-24 07:23:10.900733] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:33:56.459 [2024-07-24 07:23:10.900743] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf500 length 0x10 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.900757] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.900768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.459 [2024-07-24 07:23:10.900790] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.459 [2024-07-24 07:23:10.900798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:33:56.459 [2024-07-24 07:23:10.900809] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf528 length 0x10 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.900821] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.900836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.459 [2024-07-24 07:23:10.900871] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.459 [2024-07-24 07:23:10.900882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:33:56.459 [2024-07-24 07:23:10.900891] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf550 length 0x10 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.900906] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.900917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.459 [2024-07-24 07:23:10.900936] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.459 [2024-07-24 07:23:10.900944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:33:56.459 [2024-07-24 07:23:10.900957] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf578 length 0x10 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.900969] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.900981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.459 [2024-07-24 07:23:10.901003] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.459 [2024-07-24 07:23:10.901014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:33:56.459 [2024-07-24 07:23:10.901022] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5a0 length 0x10 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.901036] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.901046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:33:56.459 [2024-07-24 07:23:10.901075] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.459 [2024-07-24 07:23:10.901085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:33:56.459 [2024-07-24 07:23:10.901096] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5c8 length 0x10 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.901108] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.901121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.459 [2024-07-24 07:23:10.901137] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.459 [2024-07-24 07:23:10.901147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:33:56.459 [2024-07-24 07:23:10.901156] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5f0 length 0x10 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.901172] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.901182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.459 [2024-07-24 07:23:10.901207] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.459 [2024-07-24 07:23:10.901215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:33:56.459 [2024-07-24 07:23:10.901225] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf618 length 0x10 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.901237] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.901252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.459 [2024-07-24 07:23:10.901273] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.459 [2024-07-24 07:23:10.901283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:33:56.459 [2024-07-24 07:23:10.901292] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.901306] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.901316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.459 [2024-07-24 07:23:10.901340] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.459 [2024-07-24 07:23:10.901348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:33:56.459 [2024-07-24 07:23:10.901359] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.901370] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.901387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.459 [2024-07-24 07:23:10.901407] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.459 [2024-07-24 07:23:10.901417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:33:56.459 [2024-07-24 07:23:10.901426] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.901444] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.901457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.459 [2024-07-24 07:23:10.901493] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.459 [2024-07-24 07:23:10.901501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:33:56.459 [2024-07-24 07:23:10.901512] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.901524] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.901536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.459 [2024-07-24 07:23:10.901561] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.459 [2024-07-24 07:23:10.901571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:33:56.459 [2024-07-24 07:23:10.901579] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.901593] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.901604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.459 [2024-07-24 07:23:10.901644] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.459 [2024-07-24 07:23:10.901655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:33:56.459 [2024-07-24 07:23:10.901666] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.901677] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.901691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.459 [2024-07-24 07:23:10.901707] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.459 [2024-07-24 07:23:10.901718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:33:56.459 [2024-07-24 07:23:10.901726] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.901740] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.459 [2024-07-24 07:23:10.901750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.460 [2024-07-24 07:23:10.901773] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.460 [2024-07-24 07:23:10.901781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:33:56.460 [2024-07-24 07:23:10.901792] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x180600 00:33:56.460 [2024-07-24 07:23:10.901804] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.460 [2024-07-24 07:23:10.901818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.460 [2024-07-24 07:23:10.901834] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.460 [2024-07-24 07:23:10.901847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:33:56.460 [2024-07-24 07:23:10.901855] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x180600 00:33:56.460 [2024-07-24 07:23:10.901869] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.460 [2024-07-24 07:23:10.901881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.460 [2024-07-24 07:23:10.901907] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.460 [2024-07-24 07:23:10.901915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:33:56.460 [2024-07-24 07:23:10.901927] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x180600 00:33:56.460 [2024-07-24 07:23:10.901939] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.460 [2024-07-24 07:23:10.901953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.460 [2024-07-24 07:23:10.901974] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.460 [2024-07-24 07:23:10.901985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:33:56.460 [2024-07-24 07:23:10.901993] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x180600 00:33:56.460 [2024-07-24 07:23:10.902010] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.460 [2024-07-24 07:23:10.902020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:33:56.460 [2024-07-24 07:23:10.902041] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.460 [2024-07-24 07:23:10.902049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:33:56.460 [2024-07-24 07:23:10.902062] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x180600 00:33:56.460 [2024-07-24 07:23:10.902074] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.460 [2024-07-24 07:23:10.902086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.460 [2024-07-24 07:23:10.902113] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.460 [2024-07-24 07:23:10.902123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:33:56.460 [2024-07-24 07:23:10.902132] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x180600 00:33:56.460 [2024-07-24 07:23:10.902145] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.460 [2024-07-24 07:23:10.902155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.460 [2024-07-24 07:23:10.902176] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.460 [2024-07-24 07:23:10.902187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:33:56.460 [2024-07-24 07:23:10.902198] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x180600 00:33:56.460 [2024-07-24 07:23:10.902209] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.460 [2024-07-24 07:23:10.902222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.460 [2024-07-24 07:23:10.902238] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.460 [2024-07-24 07:23:10.902248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:33:56.460 [2024-07-24 07:23:10.902257] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x180600 00:33:56.460 [2024-07-24 07:23:10.902274] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.460 [2024-07-24 07:23:10.902285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.460 [2024-07-24 07:23:10.902311] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.460 [2024-07-24 07:23:10.902319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:33:56.460 [2024-07-24 07:23:10.902329] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0x180600 00:33:56.460 [2024-07-24 07:23:10.902341] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.460 [2024-07-24 07:23:10.902356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.460 [2024-07-24 07:23:10.902376] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.460 [2024-07-24 07:23:10.902386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:33:56.460 [2024-07-24 07:23:10.902399] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0x180600 00:33:56.460 [2024-07-24 07:23:10.902413] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.460 [2024-07-24 07:23:10.902426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.460 [2024-07-24 07:23:10.902444] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.460 [2024-07-24 07:23:10.902452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:33:56.460 [2024-07-24 07:23:10.902463] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf410 length 0x10 lkey 0x180600 00:33:56.460 [2024-07-24 07:23:10.902475] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.460 [2024-07-24 07:23:10.902494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.460 [2024-07-24 07:23:10.902505] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.460 [2024-07-24 07:23:10.902515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:33:56.460 [2024-07-24 07:23:10.902523] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf438 length 0x10 lkey 0x180600 00:33:56.460 [2024-07-24 07:23:10.902539] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.460 [2024-07-24 07:23:10.902549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.460 [2024-07-24 07:23:10.902574] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.460 [2024-07-24 07:23:10.902582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:33:56.460 [2024-07-24 07:23:10.902592] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf460 length 0x10 lkey 0x180600 00:33:56.460 [2024-07-24 07:23:10.902604] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.460 [2024-07-24 07:23:10.902616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.460 [2024-07-24 07:23:10.906640] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.460 [2024-07-24 07:23:10.906661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:33:56.460 [2024-07-24 07:23:10.906671] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf488 length 0x10 lkey 0x180600 00:33:56.460 [2024-07-24 07:23:10.906692] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.460 [2024-07-24 07:23:10.906704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.460 [2024-07-24 07:23:10.906735] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.460 [2024-07-24 07:23:10.906745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000e p:0 m:0 dnr:0 00:33:56.461 [2024-07-24 07:23:10.906756] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4b0 length 0x10 lkey 0x180600 00:33:56.461 [2024-07-24 07:23:10.906766] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:33:56.461 128 00:33:56.461 Transport Service Identifier: 4420 00:33:56.461 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:33:56.461 Transport Address: 192.168.100.8 00:33:56.461 Transport Specific Address Subtype - RDMA 00:33:56.461 RDMA QP Service Type: 1 (Reliable Connected) 00:33:56.461 RDMA Provider Type: 1 (No provider specified) 00:33:56.461 RDMA CM Service: 1 (RDMA_CM) 00:33:56.461 07:23:11 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:33:56.461 [2024-07-24 07:23:11.069142] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:33:56.461 [2024-07-24 07:23:11.069212] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1830339 ] 00:33:56.723 EAL: No free 2048 kB hugepages reported on node 1 00:33:56.723 [2024-07-24 07:23:11.138792] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:33:56.723 [2024-07-24 07:23:11.138899] nvme_rdma.c:2192:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:33:56.723 [2024-07-24 07:23:11.138930] nvme_rdma.c:1211:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:33:56.723 [2024-07-24 07:23:11.138938] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:33:56.723 [2024-07-24 07:23:11.138978] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:33:56.723 [2024-07-24 07:23:11.150099] nvme_rdma.c: 430:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:33:56.723 [2024-07-24 07:23:11.160565] nvme_rdma.c:1100:nvme_rdma_connect_established: *DEBUG*: rc =0 00:33:56.723 [2024-07-24 07:23:11.160583] nvme_rdma.c:1105:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:33:56.723 [2024-07-24 07:23:11.160601] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160615] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160631] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160639] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160649] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160658] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160670] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160678] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160689] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160698] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160707] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf410 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160716] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf438 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160725] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf460 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160733] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf488 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160743] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4b0 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160751] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4d8 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160761] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf500 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160770] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf528 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160782] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf550 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160790] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf578 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160799] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5a0 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160807] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5c8 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160817] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5f0 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 
07:23:11.160825] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf618 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160834] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160842] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160853] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160861] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160877] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160885] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160895] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160903] nvme_rdma.c:1119:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:33:56.723 [2024-07-24 07:23:11.160913] nvme_rdma.c:1122:nvme_rdma_connect_established: *DEBUG*: rc =0 00:33:56.723 [2024-07-24 07:23:11.160921] nvme_rdma.c:1127:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:33:56.723 [2024-07-24 07:23:11.160951] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.160969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cedc0 len:0x400 key:0x180600 00:33:56.723 [2024-07-24 07:23:11.165637] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.723 [2024-07-24 07:23:11.165660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:33:56.723 [2024-07-24 07:23:11.165679] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.165691] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:33:56.723 [2024-07-24 07:23:11.165707] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:33:56.723 [2024-07-24 07:23:11.165717] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:33:56.723 [2024-07-24 07:23:11.165745] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.165757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.723 [2024-07-24 07:23:11.165791] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.723 [2024-07-24 07:23:11.165800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:33:56.723 [2024-07-24 07:23:11.165816] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:33:56.723 [2024-07-24 07:23:11.165827] nvme_rdma.c:2367:nvme_rdma_request_ready: 
*DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.165839] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:33:56.723 [2024-07-24 07:23:11.165851] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.165866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.723 [2024-07-24 07:23:11.165884] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.723 [2024-07-24 07:23:11.165895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:33:56.723 [2024-07-24 07:23:11.165904] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:33:56.723 [2024-07-24 07:23:11.165916] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.165926] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:33:56.723 [2024-07-24 07:23:11.165939] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.165950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.723 [2024-07-24 07:23:11.165977] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.723 [2024-07-24 07:23:11.165985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:56.723 [2024-07-24 07:23:11.165998] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:33:56.723 [2024-07-24 07:23:11.166009] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x180600 00:33:56.723 [2024-07-24 07:23:11.166023] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180600 00:33:56.724 [2024-07-24 07:23:11.166034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.724 [2024-07-24 07:23:11.166060] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.724 [2024-07-24 07:23:11.166068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:56.724 [2024-07-24 07:23:11.166081] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:33:56.724 [2024-07-24 07:23:11.166091] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:33:56.724 [2024-07-24 07:23:11.166102] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x180600 00:33:56.724 [2024-07-24 07:23:11.166111] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by 
writing CC.EN = 1 (timeout 15000 ms) 00:33:56.724 [2024-07-24 07:23:11.166223] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:33:56.724 [2024-07-24 07:23:11.166230] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:33:56.724 [2024-07-24 07:23:11.166246] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180600 00:33:56.724 [2024-07-24 07:23:11.166257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.724 [2024-07-24 07:23:11.166280] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.724 [2024-07-24 07:23:11.166288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:56.724 [2024-07-24 07:23:11.166298] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:33:56.724 [2024-07-24 07:23:11.166309] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x180600 00:33:56.724 [2024-07-24 07:23:11.166323] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180600 00:33:56.724 [2024-07-24 07:23:11.166335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.724 [2024-07-24 07:23:11.166362] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.724 [2024-07-24 07:23:11.166370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:33:56.724 [2024-07-24 07:23:11.166382] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:33:56.724 [2024-07-24 07:23:11.166391] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:33:56.724 [2024-07-24 07:23:11.166401] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x180600 00:33:56.724 [2024-07-24 07:23:11.166411] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:33:56.724 [2024-07-24 07:23:11.166426] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:33:56.724 [2024-07-24 07:23:11.166446] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180600 00:33:56.724 [2024-07-24 07:23:11.166460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180600 00:33:56.724 [2024-07-24 07:23:11.166513] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.724 [2024-07-24 07:23:11.166524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:56.724 [2024-07-24 07:23:11.166538] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:33:56.724 [2024-07-24 07:23:11.166552] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:33:56.724 [2024-07-24 07:23:11.166560] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:33:56.724 [2024-07-24 07:23:11.166572] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:33:56.724 [2024-07-24 07:23:11.166580] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:33:56.724 [2024-07-24 07:23:11.166590] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:33:56.724 [2024-07-24 07:23:11.166599] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x180600 00:33:56.724 [2024-07-24 07:23:11.166612] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:33:56.724 [2024-07-24 07:23:11.166631] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180600 00:33:56.724 [2024-07-24 07:23:11.166650] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.724 [2024-07-24 07:23:11.166672] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.724 [2024-07-24 07:23:11.166683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:56.724 [2024-07-24 07:23:11.166695] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0200 length 0x40 lkey 0x180600 00:33:56.724 [2024-07-24 07:23:11.166709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.724 [2024-07-24 07:23:11.166729] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x180600 00:33:56.724 [2024-07-24 07:23:11.166741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.724 [2024-07-24 07:23:11.166751] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.724 [2024-07-24 07:23:11.166762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.724 [2024-07-24 07:23:11.166771] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x180600 00:33:56.724 [2024-07-24 07:23:11.166782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.724 [2024-07-24 07:23:11.166790] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:33:56.724 [2024-07-24 07:23:11.166801] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0x180600 00:33:56.724 [2024-07-24 07:23:11.166814] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to wait for set keep alive timeout (timeout 30000 ms) 00:33:56.724 [2024-07-24 07:23:11.166828] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180600 00:33:56.724 [2024-07-24 07:23:11.166840] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.724 [2024-07-24 07:23:11.166868] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.724 [2024-07-24 07:23:11.166876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:33:56.724 [2024-07-24 07:23:11.166888] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:33:56.724 [2024-07-24 07:23:11.166896] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:33:56.724 [2024-07-24 07:23:11.166907] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0x180600 00:33:56.724 [2024-07-24 07:23:11.166918] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:33:56.724 [2024-07-24 07:23:11.166930] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:33:56.724 [2024-07-24 07:23:11.166940] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180600 00:33:56.724 [2024-07-24 07:23:11.166955] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.724 [2024-07-24 07:23:11.166979] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.724 [2024-07-24 07:23:11.166990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:33:56.724 [2024-07-24 07:23:11.167061] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:33:56.724 [2024-07-24 07:23:11.167074] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf410 length 0x10 lkey 0x180600 00:33:56.724 [2024-07-24 07:23:11.167090] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:33:56.724 [2024-07-24 07:23:11.167114] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180600 00:33:56.724 [2024-07-24 07:23:11.167125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x180600 00:33:56.724 [2024-07-24 07:23:11.167166] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.724 [2024-07-24 07:23:11.167174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:56.724 [2024-07-24 07:23:11.167197] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:33:56.724 
[2024-07-24 07:23:11.167217] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:33:56.724 [2024-07-24 07:23:11.167228] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf438 length 0x10 lkey 0x180600 00:33:56.724 [2024-07-24 07:23:11.167239] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:33:56.724 [2024-07-24 07:23:11.167256] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180600 00:33:56.724 [2024-07-24 07:23:11.167268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180600 00:33:56.724 [2024-07-24 07:23:11.167313] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.724 [2024-07-24 07:23:11.167322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:56.724 [2024-07-24 07:23:11.167346] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:33:56.724 [2024-07-24 07:23:11.167355] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf460 length 0x10 lkey 0x180600 00:33:56.724 [2024-07-24 07:23:11.167367] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:33:56.724 [2024-07-24 07:23:11.167383] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180600 00:33:56.725 [2024-07-24 07:23:11.167398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180600 00:33:56.725 [2024-07-24 07:23:11.167428] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.725 [2024-07-24 07:23:11.167438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:56.725 [2024-07-24 07:23:11.167456] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:33:56.725 [2024-07-24 07:23:11.167467] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf488 length 0x10 lkey 0x180600 00:33:56.725 [2024-07-24 07:23:11.167477] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:33:56.725 [2024-07-24 07:23:11.167490] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:33:56.725 [2024-07-24 07:23:11.167499] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:33:56.725 [2024-07-24 07:23:11.167510] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:33:56.725 [2024-07-24 07:23:11.167519] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host 
ID (timeout 30000 ms) 00:33:56.725 [2024-07-24 07:23:11.167537] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:33:56.725 [2024-07-24 07:23:11.167547] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:33:56.725 [2024-07-24 07:23:11.167558] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:33:56.725 [2024-07-24 07:23:11.167588] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180600 00:33:56.725 [2024-07-24 07:23:11.167601] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.725 [2024-07-24 07:23:11.167612] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180600 00:33:56.725 [2024-07-24 07:23:11.167633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.725 [2024-07-24 07:23:11.167652] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.725 [2024-07-24 07:23:11.167664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:56.725 [2024-07-24 07:23:11.167673] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4b0 length 0x10 lkey 0x180600 00:33:56.725 [2024-07-24 07:23:11.167683] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.725 [2024-07-24 07:23:11.167691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:56.725 [2024-07-24 07:23:11.167701] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4d8 length 0x10 lkey 0x180600 00:33:56.725 [2024-07-24 07:23:11.167717] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180600 00:33:56.725 [2024-07-24 07:23:11.167730] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.725 [2024-07-24 07:23:11.167745] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.725 [2024-07-24 07:23:11.167755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:56.725 [2024-07-24 07:23:11.167763] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf500 length 0x10 lkey 0x180600 00:33:56.725 [2024-07-24 07:23:11.167776] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180600 00:33:56.725 [2024-07-24 07:23:11.167788] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.725 [2024-07-24 07:23:11.167814] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.725 [2024-07-24 07:23:11.167822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:56.725 [2024-07-24 07:23:11.167834] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: 
local addr 0x2000003cf528 length 0x10 lkey 0x180600 00:33:56.725 [2024-07-24 07:23:11.167848] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180600 00:33:56.725 [2024-07-24 07:23:11.167863] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.725 [2024-07-24 07:23:11.167883] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.725 [2024-07-24 07:23:11.167893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:33:56.725 [2024-07-24 07:23:11.167901] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf550 length 0x10 lkey 0x180600 00:33:56.725 [2024-07-24 07:23:11.167923] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180600 00:33:56.725 [2024-07-24 07:23:11.167935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x180600 00:33:56.725 [2024-07-24 07:23:11.167952] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180600 00:33:56.725 [2024-07-24 07:23:11.167963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x180600 00:33:56.725 [2024-07-24 07:23:11.167981] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x180600 00:33:56.725 [2024-07-24 07:23:11.167992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c8000 len:0x200 key:0x180600 00:33:56.725 [2024-07-24 07:23:11.168008] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x180600 00:33:56.725 [2024-07-24 07:23:11.168019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c6000 len:0x1000 key:0x180600 00:33:56.725 [2024-07-24 07:23:11.168033] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.725 [2024-07-24 07:23:11.168041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:56.725 [2024-07-24 07:23:11.168067] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf578 length 0x10 lkey 0x180600 00:33:56.725 [2024-07-24 07:23:11.168076] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.725 [2024-07-24 07:23:11.168085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:56.725 [2024-07-24 07:23:11.168098] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5a0 length 0x10 lkey 0x180600 00:33:56.725 [2024-07-24 07:23:11.168108] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.725 [2024-07-24 07:23:11.168115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
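The GET LOG PAGE commands traced just above (cdw10 log IDs 01, 02, 03 and 05) are the host driver pulling the error, SMART/health, firmware-slot and commands-supported-and-effects pages while it attaches to nqn.2016-06.io.spdk:cnode1; the controller report printed a few records below is rendered from that data by the SPDK identify example app. A minimal sketch of an equivalent manual invocation follows — the binary path and the transport-ID string are assumptions based on this job's configuration, not copied from the trace:
  # sketch only: query the RDMA target this run listens on (192.168.100.8:4420)
  ./build/examples/identify \
      -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'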
00:33:56.725 [2024-07-24 07:23:11.168127] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5c8 length 0x10 lkey 0x180600 00:33:56.725 [2024-07-24 07:23:11.168137] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.725 [2024-07-24 07:23:11.168147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:56.725 [2024-07-24 07:23:11.168164] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5f0 length 0x10 lkey 0x180600 00:33:56.725 ===================================================== 00:33:56.725 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:33:56.725 ===================================================== 00:33:56.725 Controller Capabilities/Features 00:33:56.725 ================================ 00:33:56.725 Vendor ID: 8086 00:33:56.725 Subsystem Vendor ID: 8086 00:33:56.725 Serial Number: SPDK00000000000001 00:33:56.725 Model Number: SPDK bdev Controller 00:33:56.725 Firmware Version: 24.09 00:33:56.725 Recommended Arb Burst: 6 00:33:56.725 IEEE OUI Identifier: e4 d2 5c 00:33:56.725 Multi-path I/O 00:33:56.725 May have multiple subsystem ports: Yes 00:33:56.725 May have multiple controllers: Yes 00:33:56.725 Associated with SR-IOV VF: No 00:33:56.725 Max Data Transfer Size: 131072 00:33:56.725 Max Number of Namespaces: 32 00:33:56.725 Max Number of I/O Queues: 127 00:33:56.725 NVMe Specification Version (VS): 1.3 00:33:56.725 NVMe Specification Version (Identify): 1.3 00:33:56.725 Maximum Queue Entries: 128 00:33:56.725 Contiguous Queues Required: Yes 00:33:56.725 Arbitration Mechanisms Supported 00:33:56.725 Weighted Round Robin: Not Supported 00:33:56.725 Vendor Specific: Not Supported 00:33:56.725 Reset Timeout: 15000 ms 00:33:56.725 Doorbell Stride: 4 bytes 00:33:56.725 NVM Subsystem Reset: Not Supported 00:33:56.725 Command Sets Supported 00:33:56.725 NVM Command Set: Supported 00:33:56.725 Boot Partition: Not Supported 00:33:56.725 Memory Page Size Minimum: 4096 bytes 00:33:56.725 Memory Page Size Maximum: 4096 bytes 00:33:56.725 Persistent Memory Region: Not Supported 00:33:56.725 Optional Asynchronous Events Supported 00:33:56.725 Namespace Attribute Notices: Supported 00:33:56.725 Firmware Activation Notices: Not Supported 00:33:56.725 ANA Change Notices: Not Supported 00:33:56.725 PLE Aggregate Log Change Notices: Not Supported 00:33:56.725 LBA Status Info Alert Notices: Not Supported 00:33:56.725 EGE Aggregate Log Change Notices: Not Supported 00:33:56.725 Normal NVM Subsystem Shutdown event: Not Supported 00:33:56.725 Zone Descriptor Change Notices: Not Supported 00:33:56.725 Discovery Log Change Notices: Not Supported 00:33:56.725 Controller Attributes 00:33:56.725 128-bit Host Identifier: Supported 00:33:56.725 Non-Operational Permissive Mode: Not Supported 00:33:56.725 NVM Sets: Not Supported 00:33:56.725 Read Recovery Levels: Not Supported 00:33:56.725 Endurance Groups: Not Supported 00:33:56.725 Predictable Latency Mode: Not Supported 00:33:56.725 Traffic Based Keep ALive: Not Supported 00:33:56.725 Namespace Granularity: Not Supported 00:33:56.725 SQ Associations: Not Supported 00:33:56.725 UUID List: Not Supported 00:33:56.725 Multi-Domain Subsystem: Not Supported 00:33:56.725 Fixed Capacity Management: Not Supported 00:33:56.725 Variable Capacity Management: Not Supported 00:33:56.725 Delete Endurance Group: Not Supported 00:33:56.726 Delete NVM Set: Not Supported 00:33:56.726 Extended LBA 
Formats Supported: Not Supported 00:33:56.726 Flexible Data Placement Supported: Not Supported 00:33:56.726 00:33:56.726 Controller Memory Buffer Support 00:33:56.726 ================================ 00:33:56.726 Supported: No 00:33:56.726 00:33:56.726 Persistent Memory Region Support 00:33:56.726 ================================ 00:33:56.726 Supported: No 00:33:56.726 00:33:56.726 Admin Command Set Attributes 00:33:56.726 ============================ 00:33:56.726 Security Send/Receive: Not Supported 00:33:56.726 Format NVM: Not Supported 00:33:56.726 Firmware Activate/Download: Not Supported 00:33:56.726 Namespace Management: Not Supported 00:33:56.726 Device Self-Test: Not Supported 00:33:56.726 Directives: Not Supported 00:33:56.726 NVMe-MI: Not Supported 00:33:56.726 Virtualization Management: Not Supported 00:33:56.726 Doorbell Buffer Config: Not Supported 00:33:56.726 Get LBA Status Capability: Not Supported 00:33:56.726 Command & Feature Lockdown Capability: Not Supported 00:33:56.726 Abort Command Limit: 4 00:33:56.726 Async Event Request Limit: 4 00:33:56.726 Number of Firmware Slots: N/A 00:33:56.726 Firmware Slot 1 Read-Only: N/A 00:33:56.726 Firmware Activation Without Reset: N/A 00:33:56.726 Multiple Update Detection Support: N/A 00:33:56.726 Firmware Update Granularity: No Information Provided 00:33:56.726 Per-Namespace SMART Log: No 00:33:56.726 Asymmetric Namespace Access Log Page: Not Supported 00:33:56.726 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:33:56.726 Command Effects Log Page: Supported 00:33:56.726 Get Log Page Extended Data: Supported 00:33:56.726 Telemetry Log Pages: Not Supported 00:33:56.726 Persistent Event Log Pages: Not Supported 00:33:56.726 Supported Log Pages Log Page: May Support 00:33:56.726 Commands Supported & Effects Log Page: Not Supported 00:33:56.726 Feature Identifiers & Effects Log Page:May Support 00:33:56.726 NVMe-MI Commands & Effects Log Page: May Support 00:33:56.726 Data Area 4 for Telemetry Log: Not Supported 00:33:56.726 Error Log Page Entries Supported: 128 00:33:56.726 Keep Alive: Supported 00:33:56.726 Keep Alive Granularity: 10000 ms 00:33:56.726 00:33:56.726 NVM Command Set Attributes 00:33:56.726 ========================== 00:33:56.726 Submission Queue Entry Size 00:33:56.726 Max: 64 00:33:56.726 Min: 64 00:33:56.726 Completion Queue Entry Size 00:33:56.726 Max: 16 00:33:56.726 Min: 16 00:33:56.726 Number of Namespaces: 32 00:33:56.726 Compare Command: Supported 00:33:56.726 Write Uncorrectable Command: Not Supported 00:33:56.726 Dataset Management Command: Supported 00:33:56.726 Write Zeroes Command: Supported 00:33:56.726 Set Features Save Field: Not Supported 00:33:56.726 Reservations: Supported 00:33:56.726 Timestamp: Not Supported 00:33:56.726 Copy: Supported 00:33:56.726 Volatile Write Cache: Present 00:33:56.726 Atomic Write Unit (Normal): 1 00:33:56.726 Atomic Write Unit (PFail): 1 00:33:56.726 Atomic Compare & Write Unit: 1 00:33:56.726 Fused Compare & Write: Supported 00:33:56.726 Scatter-Gather List 00:33:56.726 SGL Command Set: Supported 00:33:56.726 SGL Keyed: Supported 00:33:56.726 SGL Bit Bucket Descriptor: Not Supported 00:33:56.726 SGL Metadata Pointer: Not Supported 00:33:56.726 Oversized SGL: Not Supported 00:33:56.726 SGL Metadata Address: Not Supported 00:33:56.726 SGL Offset: Supported 00:33:56.726 Transport SGL Data Block: Not Supported 00:33:56.726 Replay Protected Memory Block: Not Supported 00:33:56.726 00:33:56.726 Firmware Slot Information 00:33:56.726 ========================= 00:33:56.726 Active 
slot: 1 00:33:56.726 Slot 1 Firmware Revision: 24.09 00:33:56.726 00:33:56.726 00:33:56.726 Commands Supported and Effects 00:33:56.726 ============================== 00:33:56.726 Admin Commands 00:33:56.726 -------------- 00:33:56.726 Get Log Page (02h): Supported 00:33:56.726 Identify (06h): Supported 00:33:56.726 Abort (08h): Supported 00:33:56.726 Set Features (09h): Supported 00:33:56.726 Get Features (0Ah): Supported 00:33:56.726 Asynchronous Event Request (0Ch): Supported 00:33:56.726 Keep Alive (18h): Supported 00:33:56.726 I/O Commands 00:33:56.726 ------------ 00:33:56.726 Flush (00h): Supported LBA-Change 00:33:56.726 Write (01h): Supported LBA-Change 00:33:56.726 Read (02h): Supported 00:33:56.726 Compare (05h): Supported 00:33:56.726 Write Zeroes (08h): Supported LBA-Change 00:33:56.726 Dataset Management (09h): Supported LBA-Change 00:33:56.726 Copy (19h): Supported LBA-Change 00:33:56.726 00:33:56.726 Error Log 00:33:56.726 ========= 00:33:56.726 00:33:56.726 Arbitration 00:33:56.726 =========== 00:33:56.726 Arbitration Burst: 1 00:33:56.726 00:33:56.726 Power Management 00:33:56.726 ================ 00:33:56.726 Number of Power States: 1 00:33:56.726 Current Power State: Power State #0 00:33:56.726 Power State #0: 00:33:56.726 Max Power: 0.00 W 00:33:56.726 Non-Operational State: Operational 00:33:56.726 Entry Latency: Not Reported 00:33:56.726 Exit Latency: Not Reported 00:33:56.726 Relative Read Throughput: 0 00:33:56.726 Relative Read Latency: 0 00:33:56.726 Relative Write Throughput: 0 00:33:56.726 Relative Write Latency: 0 00:33:56.726 Idle Power: Not Reported 00:33:56.726 Active Power: Not Reported 00:33:56.726 Non-Operational Permissive Mode: Not Supported 00:33:56.726 00:33:56.726 Health Information 00:33:56.726 ================== 00:33:56.726 Critical Warnings: 00:33:56.726 Available Spare Space: OK 00:33:56.726 Temperature: OK 00:33:56.726 Device Reliability: OK 00:33:56.726 Read Only: No 00:33:56.726 Volatile Memory Backup: OK 00:33:56.726 Current Temperature: 0 Kelvin (-273 Celsius) 00:33:56.726 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:33:56.726 Available Spare: 0% 00:33:56.726 Available Spare Threshold: 0% 00:33:56.726 Life Percentage [2024-07-24 07:23:11.168297] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x180600 00:33:56.726 [2024-07-24 07:23:11.168312] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.726 [2024-07-24 07:23:11.168335] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.726 [2024-07-24 07:23:11.168344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:56.726 [2024-07-24 07:23:11.168355] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf618 length 0x10 lkey 0x180600 00:33:56.726 [2024-07-24 07:23:11.168398] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:33:56.726 [2024-07-24 07:23:11.168417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.726 [2024-07-24 07:23:11.168430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.726 [2024-07-24 07:23:11.168441] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.726 [2024-07-24 07:23:11.168451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.726 [2024-07-24 07:23:11.168466] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x180600 00:33:56.726 [2024-07-24 07:23:11.168478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.726 [2024-07-24 07:23:11.168509] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.726 [2024-07-24 07:23:11.168520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:33:56.726 [2024-07-24 07:23:11.168535] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.726 [2024-07-24 07:23:11.168546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.726 [2024-07-24 07:23:11.168557] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180600 00:33:56.726 [2024-07-24 07:23:11.168575] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.726 [2024-07-24 07:23:11.168585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:56.726 [2024-07-24 07:23:11.168593] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:33:56.726 [2024-07-24 07:23:11.168604] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:33:56.726 [2024-07-24 07:23:11.168613] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180600 00:33:56.726 [2024-07-24 07:23:11.168636] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.726 [2024-07-24 07:23:11.168656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.726 [2024-07-24 07:23:11.168679] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.726 [2024-07-24 07:23:11.168687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:33:56.726 [2024-07-24 07:23:11.168697] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180600 00:33:56.727 [2024-07-24 07:23:11.168709] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.727 [2024-07-24 07:23:11.168723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.727 [2024-07-24 07:23:11.168744] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.727 [2024-07-24 07:23:11.168754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:33:56.727 [2024-07-24 07:23:11.168762] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local 
addr 0x2000003cf6b8 length 0x10 lkey 0x180600 00:33:56.727 [2024-07-24 07:23:11.168776] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.727 [2024-07-24 07:23:11.168786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.727 [2024-07-24 07:23:11.168817] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.727 [2024-07-24 07:23:11.168825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:33:56.727 [2024-07-24 07:23:11.168836] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180600 00:33:56.727 [2024-07-24 07:23:11.168849] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.727 [2024-07-24 07:23:11.168864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.727 [2024-07-24 07:23:11.168882] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.727 [2024-07-24 07:23:11.168894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:33:56.727 [2024-07-24 07:23:11.168902] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180600 00:33:56.727 [2024-07-24 07:23:11.168915] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.727 [2024-07-24 07:23:11.168926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.727 [2024-07-24 07:23:11.168949] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.727 [2024-07-24 07:23:11.168957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:33:56.727 [2024-07-24 07:23:11.168967] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180600 00:33:56.727 [2024-07-24 07:23:11.168979] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.727 [2024-07-24 07:23:11.168991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.727 [2024-07-24 07:23:11.169012] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.727 [2024-07-24 07:23:11.169024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:33:56.727 [2024-07-24 07:23:11.169032] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x180600 00:33:56.727 [2024-07-24 07:23:11.169045] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.727 [2024-07-24 07:23:11.169055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.727 [2024-07-24 07:23:11.169085] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.727 
[2024-07-24 07:23:11.169093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:33:56.727 [2024-07-24 07:23:11.169104] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x180600 00:33:56.727 [2024-07-24 07:23:11.169117] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.727 [2024-07-24 07:23:11.169130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.727 [2024-07-24 07:23:11.169149] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.727 [2024-07-24 07:23:11.169159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:33:56.727 [2024-07-24 07:23:11.169167] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x180600 00:33:56.727 [2024-07-24 07:23:11.169180] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.727 [2024-07-24 07:23:11.169198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.727 [2024-07-24 07:23:11.169215] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.727 [2024-07-24 07:23:11.169223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:33:56.727 [2024-07-24 07:23:11.169233] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x180600 00:33:56.727 [2024-07-24 07:23:11.169246] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.727 [2024-07-24 07:23:11.169258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.727 [2024-07-24 07:23:11.169275] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.727 [2024-07-24 07:23:11.169286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:33:56.727 [2024-07-24 07:23:11.169295] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x180600 00:33:56.727 [2024-07-24 07:23:11.169308] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.727 [2024-07-24 07:23:11.169318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.727 [2024-07-24 07:23:11.169342] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.727 [2024-07-24 07:23:11.169350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:33:56.727 [2024-07-24 07:23:11.169360] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x180600 00:33:56.727 [2024-07-24 07:23:11.169374] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.727 [2024-07-24 07:23:11.169387] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.727 [2024-07-24 07:23:11.169405] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.727 [2024-07-24 07:23:11.169415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:33:56.727 [2024-07-24 07:23:11.169424] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x180600 00:33:56.727 [2024-07-24 07:23:11.169437] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.727 [2024-07-24 07:23:11.169448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.727 [2024-07-24 07:23:11.169468] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.727 [2024-07-24 07:23:11.169476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:33:56.727 [2024-07-24 07:23:11.169489] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x180600 00:33:56.727 [2024-07-24 07:23:11.169502] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.727 [2024-07-24 07:23:11.169515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.727 [2024-07-24 07:23:11.169533] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.727 [2024-07-24 07:23:11.169546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:33:56.727 [2024-07-24 07:23:11.169555] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0x180600 00:33:56.727 [2024-07-24 07:23:11.169568] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.727 [2024-07-24 07:23:11.169579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.727 [2024-07-24 07:23:11.169599] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.727 [2024-07-24 07:23:11.169609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:33:56.727 [2024-07-24 07:23:11.169619] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0x180600 00:33:56.727 [2024-07-24 07:23:11.173648] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180600 00:33:56.727 [2024-07-24 07:23:11.173670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:33:56.727 [2024-07-24 07:23:11.173694] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:33:56.728 [2024-07-24 07:23:11.173705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000a p:0 m:0 dnr:0 00:33:56.728 [2024-07-24 07:23:11.173714] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 
0x2000003cf410 length 0x10 lkey 0x180600 00:33:56.728 [2024-07-24 07:23:11.173730] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:33:56.728 Used: 0% 00:33:56.728 Data Units Read: 0 00:33:56.728 Data Units Written: 0 00:33:56.728 Host Read Commands: 0 00:33:56.728 Host Write Commands: 0 00:33:56.728 Controller Busy Time: 0 minutes 00:33:56.728 Power Cycles: 0 00:33:56.728 Power On Hours: 0 hours 00:33:56.728 Unsafe Shutdowns: 0 00:33:56.728 Unrecoverable Media Errors: 0 00:33:56.728 Lifetime Error Log Entries: 0 00:33:56.728 Warning Temperature Time: 0 minutes 00:33:56.728 Critical Temperature Time: 0 minutes 00:33:56.728 00:33:56.728 Number of Queues 00:33:56.728 ================ 00:33:56.728 Number of I/O Submission Queues: 127 00:33:56.728 Number of I/O Completion Queues: 127 00:33:56.728 00:33:56.728 Active Namespaces 00:33:56.728 ================= 00:33:56.728 Namespace ID:1 00:33:56.728 Error Recovery Timeout: Unlimited 00:33:56.728 Command Set Identifier: NVM (00h) 00:33:56.728 Deallocate: Supported 00:33:56.728 Deallocated/Unwritten Error: Not Supported 00:33:56.728 Deallocated Read Value: Unknown 00:33:56.728 Deallocate in Write Zeroes: Not Supported 00:33:56.728 Deallocated Guard Field: 0xFFFF 00:33:56.728 Flush: Supported 00:33:56.728 Reservation: Supported 00:33:56.728 Namespace Sharing Capabilities: Multiple Controllers 00:33:56.728 Size (in LBAs): 131072 (0GiB) 00:33:56.728 Capacity (in LBAs): 131072 (0GiB) 00:33:56.728 Utilization (in LBAs): 131072 (0GiB) 00:33:56.728 NGUID: ABCDEF0123456789ABCDEF0123456789 00:33:56.728 EUI64: ABCDEF0123456789 00:33:56.728 UUID: f3084970-8dfd-44bb-a46d-8d7aad052935 00:33:56.728 Thin Provisioning: Not Supported 00:33:56.728 Per-NS Atomic Units: Yes 00:33:56.728 Atomic Boundary Size (Normal): 0 00:33:56.728 Atomic Boundary Size (PFail): 0 00:33:56.728 Atomic Boundary Offset: 0 00:33:56.728 Maximum Single Source Range Length: 65535 00:33:56.728 Maximum Copy Length: 65535 00:33:56.728 Maximum Source Range Count: 1 00:33:56.728 NGUID/EUI64 Never Reused: No 00:33:56.728 Namespace Write Protected: No 00:33:56.728 Number of LBA Formats: 1 00:33:56.728 Current LBA Format: LBA Format #00 00:33:56.728 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:56.728 00:33:56.728 07:23:11 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:33:56.728 07:23:11 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:56.728 07:23:11 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.728 07:23:11 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:56.728 07:23:11 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.728 07:23:11 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:33:56.728 07:23:11 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:33:56.728 07:23:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:56.728 07:23:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:33:56.728 07:23:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:33:56.728 07:23:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:33:56.728 07:23:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 
00:33:56.728 07:23:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:56.728 07:23:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:33:56.728 rmmod nvme_rdma 00:33:56.728 rmmod nvme_fabrics 00:33:56.728 07:23:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:56.728 07:23:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:33:56.728 07:23:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:33:56.728 07:23:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1830051 ']' 00:33:56.728 07:23:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1830051 00:33:56.728 07:23:11 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1830051 ']' 00:33:56.728 07:23:11 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1830051 00:33:56.728 07:23:11 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:33:56.987 07:23:11 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:56.987 07:23:11 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1830051 00:33:56.987 07:23:11 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:56.987 07:23:11 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:56.987 07:23:11 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1830051' 00:33:56.987 killing process with pid 1830051 00:33:56.987 07:23:11 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1830051 00:33:56.987 07:23:11 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1830051 00:33:58.940 07:23:13 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:58.940 07:23:13 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:33:58.940 00:33:58.940 real 0m12.550s 00:33:58.940 user 0m14.617s 00:33:58.940 sys 0m7.168s 00:33:58.940 07:23:13 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:58.940 07:23:13 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:58.940 ************************************ 00:33:58.940 END TEST nvmf_identify 00:33:58.940 ************************************ 00:33:58.940 07:23:13 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:33:58.940 07:23:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:58.940 07:23:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:58.940 07:23:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.940 ************************************ 00:33:58.940 START TEST nvmf_perf 00:33:58.940 ************************************ 00:33:58.940 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:33:59.199 * Looking for test storage... 
00:33:59.199 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:33:59.199 07:23:13 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:34:07.324 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:34:07.325 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:34:07.325 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:34:07.325 Found net devices under 0000:d9:00.0: mlx_0_0 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:34:07.325 Found net devices under 0000:d9:00.1: mlx_0_1 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # rdma_device_init 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # uname 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@63 -- # modprobe ib_core 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 
-- # (( 2 == 0 )) 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:34:07.325 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:07.325 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:34:07.325 altname enp217s0f0np0 00:34:07.325 altname ens818f0np0 00:34:07.325 inet 192.168.100.8/24 scope global mlx_0_0 00:34:07.325 valid_lft forever preferred_lft forever 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:34:07.325 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:07.325 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 
00:34:07.325 altname enp217s0f1np1 00:34:07.325 altname ens818f1np1 00:34:07.325 inet 192.168.100.9/24 scope global mlx_0_1 00:34:07.325 valid_lft forever preferred_lft forever 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:07.325 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:34:07.326 07:23:21 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:34:07.326 192.168.100.9' 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:34:07.326 192.168.100.9' 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # head -n 1 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:34:07.326 192.168.100.9' 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # tail -n +2 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # head -n 1 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1834767 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1834767 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1834767 ']' 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:07.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:07.326 07:23:21 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:34:07.586 [2024-07-24 07:23:21.968304] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:34:07.586 [2024-07-24 07:23:21.968401] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:07.586 EAL: No free 2048 kB hugepages reported on node 1 00:34:07.586 [2024-07-24 07:23:22.114519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:07.845 [2024-07-24 07:23:22.319097] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:07.845 [2024-07-24 07:23:22.319146] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:07.845 [2024-07-24 07:23:22.319160] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:07.845 [2024-07-24 07:23:22.319171] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:07.845 [2024-07-24 07:23:22.319183] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:07.845 [2024-07-24 07:23:22.319305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:07.845 [2024-07-24 07:23:22.319440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:07.845 [2024-07-24 07:23:22.319526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:07.845 [2024-07-24 07:23:22.319537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:08.413 07:23:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:08.413 07:23:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:34:08.413 07:23:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:08.413 07:23:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:08.413 07:23:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:34:08.413 07:23:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:08.413 07:23:22 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:08.413 07:23:22 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:34:11.700 07:23:25 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:34:11.700 07:23:25 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:34:11.700 07:23:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:34:11.700 07:23:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:11.959 07:23:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:34:11.959 07:23:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:34:11.959 07:23:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:34:11.959 07:23:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:34:11.959 07:23:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:34:11.959 [2024-07-24 07:23:26.502735] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:34:11.959 [2024-07-24 07:23:26.527826] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x61200002a1c0/0x7f06e556a940) succeed. 00:34:11.959 [2024-07-24 07:23:26.537713] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x61200002a340/0x7f06e5526940) succeed. 00:34:12.218 07:23:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:12.478 07:23:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:34:12.478 07:23:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:12.478 07:23:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:34:12.478 07:23:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:12.737 07:23:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:34:12.997 [2024-07-24 07:23:27.433037] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:34:12.997 07:23:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:34:13.256 07:23:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:34:13.256 07:23:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:34:13.256 07:23:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:34:13.256 07:23:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:34:14.634 Initializing NVMe Controllers 00:34:14.634 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:34:14.634 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:34:14.634 Initialization complete. Launching workers. 
00:34:14.634 ======================================================== 00:34:14.634 Latency(us) 00:34:14.634 Device Information : IOPS MiB/s Average min max 00:34:14.634 PCIE (0000:d8:00.0) NSID 1 from core 0: 93434.70 364.98 342.10 41.75 5210.80 00:34:14.634 ======================================================== 00:34:14.634 Total : 93434.70 364.98 342.10 41.75 5210.80 00:34:14.634 00:34:14.634 07:23:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:34:14.894 EAL: No free 2048 kB hugepages reported on node 1 00:34:18.180 Initializing NVMe Controllers 00:34:18.180 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:34:18.180 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:18.180 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:34:18.180 Initialization complete. Launching workers. 00:34:18.180 ======================================================== 00:34:18.180 Latency(us) 00:34:18.180 Device Information : IOPS MiB/s Average min max 00:34:18.180 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6126.99 23.93 162.99 51.55 5035.88 00:34:18.180 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4706.99 18.39 212.24 83.87 5100.42 00:34:18.180 ======================================================== 00:34:18.180 Total : 10833.98 42.32 184.38 51.55 5100.42 00:34:18.180 00:34:18.180 07:23:32 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:34:18.180 EAL: No free 2048 kB hugepages reported on node 1 00:34:21.467 Initializing NVMe Controllers 00:34:21.467 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:34:21.467 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:21.467 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:34:21.467 Initialization complete. Launching workers. 
00:34:21.467 ======================================================== 00:34:21.467 Latency(us) 00:34:21.467 Device Information : IOPS MiB/s Average min max 00:34:21.467 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16430.92 64.18 1947.20 563.03 5593.72 00:34:21.467 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4017.80 15.69 7963.00 4938.45 10048.21 00:34:21.467 ======================================================== 00:34:21.467 Total : 20448.71 79.88 3129.19 563.03 10048.21 00:34:21.467 00:34:21.726 07:23:36 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:34:21.726 07:23:36 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:34:21.726 EAL: No free 2048 kB hugepages reported on node 1 00:34:27.069 Initializing NVMe Controllers 00:34:27.069 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:34:27.069 Controller IO queue size 128, less than required. 00:34:27.069 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:27.069 Controller IO queue size 128, less than required. 00:34:27.069 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:27.069 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:27.069 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:34:27.069 Initialization complete. Launching workers. 00:34:27.069 ======================================================== 00:34:27.069 Latency(us) 00:34:27.069 Device Information : IOPS MiB/s Average min max 00:34:27.069 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3291.00 822.75 39264.89 15091.84 384564.09 00:34:27.069 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3475.50 868.87 37159.52 16021.36 377423.02 00:34:27.069 ======================================================== 00:34:27.069 Total : 6766.49 1691.62 38183.50 15091.84 384564.09 00:34:27.069 00:34:27.069 07:23:41 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:34:27.069 EAL: No free 2048 kB hugepages reported on node 1 00:34:27.069 No valid NVMe controllers or AIO or URING devices found 00:34:27.069 Initializing NVMe Controllers 00:34:27.069 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:34:27.069 Controller IO queue size 128, less than required. 00:34:27.069 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:27.069 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:34:27.069 Controller IO queue size 128, less than required. 00:34:27.069 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:27.069 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:34:27.069 WARNING: Some requested NVMe devices were skipped 00:34:27.069 07:23:41 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:34:27.069 EAL: No free 2048 kB hugepages reported on node 1 00:34:32.343 Initializing NVMe Controllers 00:34:32.343 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:34:32.343 Controller IO queue size 128, less than required. 00:34:32.343 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:32.343 Controller IO queue size 128, less than required. 00:34:32.343 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:32.343 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:32.343 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:34:32.343 Initialization complete. Launching workers. 00:34:32.343 00:34:32.343 ==================== 00:34:32.343 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:34:32.343 RDMA transport: 00:34:32.343 dev name: mlx5_0 00:34:32.343 polls: 315469 00:34:32.343 idle_polls: 313025 00:34:32.343 completions: 36518 00:34:32.343 queued_requests: 1 00:34:32.343 total_send_wrs: 18259 00:34:32.343 send_doorbell_updates: 2248 00:34:32.343 total_recv_wrs: 18386 00:34:32.343 recv_doorbell_updates: 2250 00:34:32.343 --------------------------------- 00:34:32.343 00:34:32.343 ==================== 00:34:32.343 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:34:32.343 RDMA transport: 00:34:32.344 dev name: mlx5_0 00:34:32.344 polls: 315660 00:34:32.344 idle_polls: 315418 00:34:32.344 completions: 17262 00:34:32.344 queued_requests: 1 00:34:32.344 total_send_wrs: 8631 00:34:32.344 send_doorbell_updates: 232 00:34:32.344 total_recv_wrs: 8758 00:34:32.344 recv_doorbell_updates: 233 00:34:32.344 --------------------------------- 00:34:32.344 ======================================================== 00:34:32.344 Latency(us) 00:34:32.344 Device Information : IOPS MiB/s Average min max 00:34:32.344 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4561.98 1140.50 28270.74 14842.09 234891.38 00:34:32.344 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2156.31 539.08 60369.45 31764.89 408592.77 00:34:32.344 ======================================================== 00:34:32.344 Total : 6718.29 1679.57 38573.17 14842.09 408592.77 00:34:32.344 00:34:32.344 07:23:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:34:32.344 07:23:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:32.344 07:23:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:34:32.344 07:23:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:d8:00.0 ']' 00:34:32.344 07:23:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:34:38.913 07:23:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # 
ls_guid=023ed4f4-5dc1-4c5b-b70e-f66945532ad0 00:34:38.913 07:23:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 023ed4f4-5dc1-4c5b-b70e-f66945532ad0 00:34:38.913 07:23:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1362 -- # local lvs_uuid=023ed4f4-5dc1-4c5b-b70e-f66945532ad0 00:34:38.913 07:23:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1363 -- # local lvs_info 00:34:38.913 07:23:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local fc 00:34:38.913 07:23:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local cs 00:34:38.913 07:23:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:38.913 07:23:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # lvs_info='[ 00:34:38.913 { 00:34:38.913 "uuid": "023ed4f4-5dc1-4c5b-b70e-f66945532ad0", 00:34:38.913 "name": "lvs_0", 00:34:38.913 "base_bdev": "Nvme0n1", 00:34:38.913 "total_data_clusters": 476466, 00:34:38.913 "free_clusters": 476466, 00:34:38.913 "block_size": 512, 00:34:38.913 "cluster_size": 4194304 00:34:38.913 } 00:34:38.913 ]' 00:34:38.913 07:23:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="023ed4f4-5dc1-4c5b-b70e-f66945532ad0") .free_clusters' 00:34:38.913 07:23:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # fc=476466 00:34:38.913 07:23:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="023ed4f4-5dc1-4c5b-b70e-f66945532ad0") .cluster_size' 00:34:38.913 07:23:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # cs=4194304 00:34:38.913 07:23:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # free_mb=1905864 00:34:38.913 07:23:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # echo 1905864 00:34:38.913 1905864 00:34:38.913 07:23:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1905864 -gt 20480 ']' 00:34:38.913 07:23:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:34:38.913 07:23:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 023ed4f4-5dc1-4c5b-b70e-f66945532ad0 lbd_0 20480 00:34:38.913 07:23:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=05bd4ae8-57ae-4b3e-ba3b-4b8fdf5a1909 00:34:38.913 07:23:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 05bd4ae8-57ae-4b3e-ba3b-4b8fdf5a1909 lvs_n_0 00:34:40.815 07:23:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=1a21ab9f-81b6-40c6-a6e0-28e0c766b88f 00:34:40.815 07:23:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 1a21ab9f-81b6-40c6-a6e0-28e0c766b88f 00:34:40.815 07:23:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1362 -- # local lvs_uuid=1a21ab9f-81b6-40c6-a6e0-28e0c766b88f 00:34:40.815 07:23:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1363 -- # local lvs_info 00:34:40.815 07:23:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local fc 00:34:40.815 07:23:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local cs 00:34:41.074 07:23:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:41.074 07:23:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # lvs_info='[ 00:34:41.074 { 00:34:41.074 "uuid": "023ed4f4-5dc1-4c5b-b70e-f66945532ad0", 00:34:41.074 "name": "lvs_0", 00:34:41.074 "base_bdev": "Nvme0n1", 00:34:41.074 "total_data_clusters": 476466, 00:34:41.074 "free_clusters": 471346, 00:34:41.074 "block_size": 512, 00:34:41.074 "cluster_size": 4194304 00:34:41.074 }, 00:34:41.074 { 00:34:41.074 "uuid": "1a21ab9f-81b6-40c6-a6e0-28e0c766b88f", 00:34:41.074 "name": "lvs_n_0", 00:34:41.074 "base_bdev": "05bd4ae8-57ae-4b3e-ba3b-4b8fdf5a1909", 00:34:41.074 "total_data_clusters": 5114, 00:34:41.074 "free_clusters": 5114, 00:34:41.074 "block_size": 512, 00:34:41.074 "cluster_size": 4194304 00:34:41.074 } 00:34:41.074 ]' 00:34:41.074 07:23:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="1a21ab9f-81b6-40c6-a6e0-28e0c766b88f") .free_clusters' 00:34:41.074 07:23:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # fc=5114 00:34:41.074 07:23:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="1a21ab9f-81b6-40c6-a6e0-28e0c766b88f") .cluster_size' 00:34:41.333 07:23:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # cs=4194304 00:34:41.333 07:23:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # free_mb=20456 00:34:41.333 07:23:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # echo 20456 00:34:41.333 20456 00:34:41.333 07:23:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:34:41.333 07:23:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1a21ab9f-81b6-40c6-a6e0-28e0c766b88f lbd_nest_0 20456 00:34:41.333 07:23:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=724a2640-6319-4c07-b83c-2c6931cf810f 00:34:41.333 07:23:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:41.591 07:23:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:34:41.591 07:23:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 724a2640-6319-4c07-b83c-2c6931cf810f 00:34:41.850 07:23:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:34:42.109 07:23:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:34:42.109 07:23:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:34:42.109 07:23:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:34:42.109 07:23:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:34:42.109 07:23:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:34:42.109 EAL: No free 2048 kB hugepages reported on node 1 00:34:54.324 
Initializing NVMe Controllers 00:34:54.325 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:34:54.325 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:54.325 Initialization complete. Launching workers. 00:34:54.325 ======================================================== 00:34:54.325 Latency(us) 00:34:54.325 Device Information : IOPS MiB/s Average min max 00:34:54.325 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5115.70 2.50 195.01 78.51 8005.88 00:34:54.325 ======================================================== 00:34:54.325 Total : 5115.70 2.50 195.01 78.51 8005.88 00:34:54.325 00:34:54.325 07:24:08 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:34:54.325 07:24:08 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:34:54.325 EAL: No free 2048 kB hugepages reported on node 1 00:35:06.565 Initializing NVMe Controllers 00:35:06.565 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:35:06.565 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:06.565 Initialization complete. Launching workers. 00:35:06.565 ======================================================== 00:35:06.565 Latency(us) 00:35:06.565 Device Information : IOPS MiB/s Average min max 00:35:06.565 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2472.16 309.02 404.26 174.25 8085.74 00:35:06.565 ======================================================== 00:35:06.565 Total : 2472.16 309.02 404.26 174.25 8085.74 00:35:06.565 00:35:06.565 07:24:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:35:06.565 07:24:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:35:06.565 07:24:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:35:06.565 EAL: No free 2048 kB hugepages reported on node 1 00:35:16.548 Initializing NVMe Controllers 00:35:16.548 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:35:16.548 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:16.548 Initialization complete. Launching workers. 
00:35:16.548 ======================================================== 00:35:16.548 Latency(us) 00:35:16.548 Device Information : IOPS MiB/s Average min max 00:35:16.548 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10212.10 4.99 3132.80 981.39 9320.48 00:35:16.548 ======================================================== 00:35:16.548 Total : 10212.10 4.99 3132.80 981.39 9320.48 00:35:16.548 00:35:16.548 07:24:31 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:35:16.548 07:24:31 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:35:16.548 EAL: No free 2048 kB hugepages reported on node 1 00:35:28.759 Initializing NVMe Controllers 00:35:28.759 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:35:28.759 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:28.759 Initialization complete. Launching workers. 00:35:28.759 ======================================================== 00:35:28.759 Latency(us) 00:35:28.759 Device Information : IOPS MiB/s Average min max 00:35:28.759 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3961.19 495.15 8078.16 4914.40 29593.58 00:35:28.759 ======================================================== 00:35:28.759 Total : 3961.19 495.15 8078.16 4914.40 29593.58 00:35:28.759 00:35:28.759 07:24:42 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:35:28.759 07:24:42 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:35:28.759 07:24:42 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:35:28.759 EAL: No free 2048 kB hugepages reported on node 1 00:35:40.964 Initializing NVMe Controllers 00:35:40.964 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:35:40.964 Controller IO queue size 128, less than required. 00:35:40.964 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:40.964 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:40.964 Initialization complete. Launching workers. 
00:35:40.964 ======================================================== 00:35:40.964 Latency(us) 00:35:40.964 Device Information : IOPS MiB/s Average min max 00:35:40.964 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16593.15 8.10 7713.77 2139.36 15826.35 00:35:40.964 ======================================================== 00:35:40.964 Total : 16593.15 8.10 7713.77 2139.36 15826.35 00:35:40.964 00:35:40.964 07:24:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:35:40.964 07:24:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:35:40.964 EAL: No free 2048 kB hugepages reported on node 1 00:35:51.004 Initializing NVMe Controllers 00:35:51.004 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:35:51.005 Controller IO queue size 128, less than required. 00:35:51.005 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:51.005 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:51.005 Initialization complete. Launching workers. 00:35:51.005 ======================================================== 00:35:51.005 Latency(us) 00:35:51.005 Device Information : IOPS MiB/s Average min max 00:35:51.005 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9784.33 1223.04 13078.45 3622.25 86723.30 00:35:51.005 ======================================================== 00:35:51.005 Total : 9784.33 1223.04 13078.45 3622.25 86723.30 00:35:51.005 00:35:51.005 07:25:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:51.264 07:25:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 724a2640-6319-4c07-b83c-2c6931cf810f 00:35:52.204 07:25:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:35:52.204 07:25:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 05bd4ae8-57ae-4b3e-ba3b-4b8fdf5a1909 00:35:52.463 07:25:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:35:52.722 07:25:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:35:52.722 07:25:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:35:52.722 07:25:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:52.722 07:25:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:35:52.722 07:25:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:35:52.722 07:25:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:35:52.722 07:25:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:35:52.722 07:25:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:52.722 07:25:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 
00:35:52.722 rmmod nvme_rdma 00:35:52.722 rmmod nvme_fabrics 00:35:52.722 07:25:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:52.722 07:25:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:35:52.722 07:25:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:35:52.722 07:25:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1834767 ']' 00:35:52.722 07:25:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1834767 00:35:52.722 07:25:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1834767 ']' 00:35:52.722 07:25:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1834767 00:35:52.722 07:25:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:35:52.722 07:25:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:52.722 07:25:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1834767 00:35:52.722 07:25:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:52.722 07:25:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:52.722 07:25:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1834767' 00:35:52.722 killing process with pid 1834767 00:35:52.722 07:25:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1834767 00:35:52.722 07:25:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1834767 00:35:56.918 07:25:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:56.918 07:25:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:35:56.918 00:35:56.918 real 1m57.320s 00:35:56.918 user 7m15.972s 00:35:56.918 sys 0m9.288s 00:35:56.918 07:25:10 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:56.918 07:25:10 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:35:56.918 ************************************ 00:35:56.918 END TEST nvmf_perf 00:35:56.918 ************************************ 00:35:56.918 07:25:10 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:35:56.918 07:25:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:56.918 07:25:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:56.918 07:25:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.918 ************************************ 00:35:56.918 START TEST nvmf_fio_host 00:35:56.918 ************************************ 00:35:56.918 07:25:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:35:56.918 * Looking for test storage... 
00:35:56.918 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:35:56.918 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:35:56.918 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:35:56.919 07:25:11 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:35:56.919 07:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:36:05.045 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:36:05.045 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:36:05.045 Found net devices under 0000:d9:00.0: mlx_0_0 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:36:05.045 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:36:05.046 Found net devices under 0000:d9:00.1: mlx_0_1 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # rdma_device_init 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # uname 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev 
rxe_net_dev rxe_net_devs 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:36:05.046 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:36:05.046 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:36:05.046 altname enp217s0f0np0 00:36:05.046 altname ens818f0np0 00:36:05.046 inet 192.168.100.8/24 scope global mlx_0_0 00:36:05.046 valid_lft forever preferred_lft forever 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@113 -- # awk '{print $4}' 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:36:05.046 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:36:05.046 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:36:05.046 altname enp217s0f1np1 00:36:05.046 altname ens818f1np1 00:36:05.046 inet 192.168.100.9/24 scope global mlx_0_1 00:36:05.046 valid_lft forever preferred_lft forever 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:36:05.046 07:25:18 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:36:05.046 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:36:05.047 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:36:05.047 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:36:05.047 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:36:05.047 192.168.100.9' 00:36:05.047 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:36:05.047 192.168.100.9' 00:36:05.047 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # head -n 1 00:36:05.047 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:36:05.047 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # head -n 1 00:36:05.047 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:36:05.047 192.168.100.9' 00:36:05.047 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # tail -n +2 00:36:05.047 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:36:05.047 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:36:05.047 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:36:05.047 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:36:05.047 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:36:05.047 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:36:05.047 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:36:05.047 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:36:05.047 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:05.047 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.047 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1856792 00:36:05.047 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:36:05.047 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:05.047 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1856792 00:36:05.047 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1856792 ']' 00:36:05.047 07:25:18 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:05.047 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:05.047 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:05.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:05.047 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:05.047 07:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.047 [2024-07-24 07:25:18.868380] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:36:05.047 [2024-07-24 07:25:18.868470] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:05.047 EAL: No free 2048 kB hugepages reported on node 1 00:36:05.047 [2024-07-24 07:25:19.016798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:05.047 [2024-07-24 07:25:19.217607] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:05.047 [2024-07-24 07:25:19.217660] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:05.047 [2024-07-24 07:25:19.217675] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:05.047 [2024-07-24 07:25:19.217686] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:05.047 [2024-07-24 07:25:19.217697] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:05.047 [2024-07-24 07:25:19.217874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:05.047 [2024-07-24 07:25:19.217969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:05.047 [2024-07-24 07:25:19.218058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:05.047 [2024-07-24 07:25:19.218074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:05.047 07:25:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:05.047 07:25:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:36:05.047 07:25:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:36:05.306 [2024-07-24 07:25:19.836515] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f8eb1593940) succeed. 00:36:05.306 [2024-07-24 07:25:19.846068] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f8eb154f940) succeed. 
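At this point the target side is up: the RDMA transport has been created with shared buffers and both mlx5 IB devices registered successfully. The next few trace entries wire a RAM-backed bdev into a subsystem and expose it over RDMA; condensed below into the rpc.py calls actually issued (paths, names and arguments exactly as they appear in this trace; the $rpc shorthand is only for readability):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192            # RDMA transport (already done above)
  $rpc bdev_malloc_create 64 512 -b Malloc1                                       # 64 MB malloc bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # allow any host, fixed serial
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1                   # Malloc1 becomes the namespace of cnode1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420     # discovery service on the same listener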
00:36:05.566 07:25:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:36:05.566 07:25:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:05.566 07:25:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.825 07:25:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:36:06.084 Malloc1 00:36:06.085 07:25:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:06.085 07:25:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:36:06.343 07:25:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:36:06.602 [2024-07-24 07:25:21.011197] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:36:06.602 07:25:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:36:06.602 07:25:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:36:06.602 07:25:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:36:06.602 07:25:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:36:06.602 07:25:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:36:06.602 07:25:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:06.602 07:25:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local sanitizers 00:36:06.602 07:25:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:36:06.602 07:25:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # shift 00:36:06.602 07:25:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local asan_lib= 00:36:06.602 07:25:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:36:06.602 07:25:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:36:06.602 07:25:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libasan 00:36:06.602 07:25:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:36:06.892 07:25:21 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:06.892 07:25:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:06.892 07:25:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # break 00:36:06.892 07:25:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:36:06.892 07:25:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:36:07.162 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:36:07.162 fio-3.35 00:36:07.162 Starting 1 thread 00:36:07.162 EAL: No free 2048 kB hugepages reported on node 1 00:36:09.699 00:36:09.699 test: (groupid=0, jobs=1): err= 0: pid=1857468: Wed Jul 24 07:25:24 2024 00:36:09.699 read: IOPS=15.4k, BW=60.2MiB/s (63.1MB/s)(121MiB/2004msec) 00:36:09.699 slat (nsec): min=1564, max=41605, avg=1749.15, stdev=666.33 00:36:09.699 clat (usec): min=3058, max=7466, avg=4129.83, stdev=123.15 00:36:09.699 lat (usec): min=3064, max=7467, avg=4131.58, stdev=123.19 00:36:09.699 clat percentiles (usec): 00:36:09.699 | 1.00th=[ 3720], 5.00th=[ 4080], 10.00th=[ 4080], 20.00th=[ 4113], 00:36:09.699 | 30.00th=[ 4113], 40.00th=[ 4113], 50.00th=[ 4113], 60.00th=[ 4146], 00:36:09.699 | 70.00th=[ 4146], 80.00th=[ 4146], 90.00th=[ 4146], 95.00th=[ 4178], 00:36:09.699 | 99.00th=[ 4555], 99.50th=[ 4555], 99.90th=[ 5866], 99.95th=[ 6456], 00:36:09.699 | 99.99th=[ 7439] 00:36:09.699 bw ( KiB/s): min=60464, max=62536, per=99.99%, avg=61652.00, stdev=1004.62, samples=4 00:36:09.699 iops : min=15116, max=15634, avg=15413.00, stdev=251.15, samples=4 00:36:09.699 write: IOPS=15.4k, BW=60.3MiB/s (63.2MB/s)(121MiB/2004msec); 0 zone resets 00:36:09.699 slat (nsec): min=1616, max=120765, avg=1843.41, stdev=1036.68 00:36:09.699 clat (usec): min=3035, max=7480, avg=4127.41, stdev=124.50 00:36:09.699 lat (usec): min=3040, max=7481, avg=4129.25, stdev=124.57 00:36:09.699 clat percentiles (usec): 00:36:09.699 | 1.00th=[ 3720], 5.00th=[ 4080], 10.00th=[ 4080], 20.00th=[ 4113], 00:36:09.699 | 30.00th=[ 4113], 40.00th=[ 4113], 50.00th=[ 4113], 60.00th=[ 4146], 00:36:09.699 | 70.00th=[ 4146], 80.00th=[ 4146], 90.00th=[ 4146], 95.00th=[ 4178], 00:36:09.699 | 99.00th=[ 4490], 99.50th=[ 4555], 99.90th=[ 5866], 99.95th=[ 6915], 00:36:09.699 | 99.99th=[ 7439] 00:36:09.699 bw ( KiB/s): min=60848, max=62584, per=100.00%, avg=61698.00, stdev=710.11, samples=4 00:36:09.699 iops : min=15212, max=15646, avg=15424.50, stdev=177.53, samples=4 00:36:09.699 lat (msec) : 4=1.95%, 10=98.05% 00:36:09.699 cpu : usr=99.35%, sys=0.25%, ctx=15, majf=0, minf=1303 00:36:09.699 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:36:09.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:09.699 issued rwts: total=30891,30911,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.699 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:09.699 00:36:09.699 Run status group 0 (all jobs): 00:36:09.699 READ: bw=60.2MiB/s (63.1MB/s), 60.2MiB/s-60.2MiB/s 
(63.1MB/s-63.1MB/s), io=121MiB (127MB), run=2004-2004msec 00:36:09.699 WRITE: bw=60.3MiB/s (63.2MB/s), 60.3MiB/s-60.3MiB/s (63.2MB/s-63.2MB/s), io=121MiB (127MB), run=2004-2004msec 00:36:09.699 ----------------------------------------------------- 00:36:09.699 Suppressions used: 00:36:09.699 count bytes template 00:36:09.699 1 63 /usr/src/fio/parse.c 00:36:09.699 1 8 libtcmalloc_minimal.so 00:36:09.699 ----------------------------------------------------- 00:36:09.699 00:36:09.699 07:25:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:36:09.699 07:25:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:36:09.699 07:25:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:36:09.699 07:25:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:09.699 07:25:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local sanitizers 00:36:09.699 07:25:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:36:09.699 07:25:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # shift 00:36:09.699 07:25:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local asan_lib= 00:36:09.699 07:25:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:36:09.699 07:25:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:36:09.699 07:25:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libasan 00:36:09.699 07:25:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:36:09.699 07:25:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:09.699 07:25:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:09.699 07:25:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # break 00:36:09.699 07:25:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:36:09.699 07:25:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:36:10.280 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:36:10.280 fio-3.35 00:36:10.280 Starting 1 thread 00:36:10.280 EAL: No free 2048 kB hugepages reported on node 1 00:36:12.851 00:36:12.851 test: (groupid=0, jobs=1): err= 0: pid=1858120: Wed Jul 24 07:25:27 2024 00:36:12.851 read: IOPS=12.4k, BW=193MiB/s (202MB/s)(380MiB/1967msec) 
00:36:12.851 slat (nsec): min=2557, max=44703, avg=2971.06, stdev=1034.70 00:36:12.851 clat (usec): min=569, max=8946, avg=1951.14, stdev=1623.55 00:36:12.851 lat (usec): min=572, max=8949, avg=1954.12, stdev=1623.86 00:36:12.851 clat percentiles (usec): 00:36:12.851 | 1.00th=[ 791], 5.00th=[ 898], 10.00th=[ 963], 20.00th=[ 1057], 00:36:12.851 | 30.00th=[ 1139], 40.00th=[ 1237], 50.00th=[ 1352], 60.00th=[ 1483], 00:36:12.851 | 70.00th=[ 1631], 80.00th=[ 1844], 90.00th=[ 5735], 95.00th=[ 5800], 00:36:12.851 | 99.00th=[ 7504], 99.50th=[ 8094], 99.90th=[ 8586], 99.95th=[ 8717], 00:36:12.851 | 99.99th=[ 8848] 00:36:12.851 bw ( KiB/s): min=92710, max=98176, per=48.52%, avg=95921.50, stdev=2307.21, samples=4 00:36:12.851 iops : min= 5794, max= 6136, avg=5995.00, stdev=144.37, samples=4 00:36:12.851 write: IOPS=7051, BW=110MiB/s (116MB/s)(195MiB/1773msec); 0 zone resets 00:36:12.851 slat (usec): min=26, max=171, avg=29.23, stdev= 3.82 00:36:12.851 clat (usec): min=5115, max=22768, avg=14790.12, stdev=2027.70 00:36:12.851 lat (usec): min=5144, max=22798, avg=14819.35, stdev=2027.51 00:36:12.851 clat percentiles (usec): 00:36:12.851 | 1.00th=[ 8291], 5.00th=[11731], 10.00th=[12518], 20.00th=[13435], 00:36:12.851 | 30.00th=[13960], 40.00th=[14353], 50.00th=[14746], 60.00th=[15139], 00:36:12.852 | 70.00th=[15664], 80.00th=[16319], 90.00th=[17171], 95.00th=[17957], 00:36:12.852 | 99.00th=[19792], 99.50th=[20579], 99.90th=[22152], 99.95th=[22414], 00:36:12.852 | 99.99th=[22676] 00:36:12.852 bw ( KiB/s): min=96351, max=101056, per=87.78%, avg=99039.75, stdev=2105.79, samples=4 00:36:12.852 iops : min= 6021, max= 6316, avg=6189.75, stdev=132.01, samples=4 00:36:12.852 lat (usec) : 750=0.28%, 1000=8.67% 00:36:12.852 lat (msec) : 2=46.28%, 4=2.05%, 10=9.31%, 20=33.13%, 50=0.28% 00:36:12.852 cpu : usr=95.71%, sys=2.64%, ctx=189, majf=0, minf=11210 00:36:12.852 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:36:12.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:12.852 issued rwts: total=24304,12503,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.852 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:12.852 00:36:12.852 Run status group 0 (all jobs): 00:36:12.852 READ: bw=193MiB/s (202MB/s), 193MiB/s-193MiB/s (202MB/s-202MB/s), io=380MiB (398MB), run=1967-1967msec 00:36:12.852 WRITE: bw=110MiB/s (116MB/s), 110MiB/s-110MiB/s (116MB/s-116MB/s), io=195MiB (205MB), run=1773-1773msec 00:36:12.852 ----------------------------------------------------- 00:36:12.852 Suppressions used: 00:36:12.852 count bytes template 00:36:12.852 1 63 /usr/src/fio/parse.c 00:36:12.852 205 19680 /usr/src/fio/iolog.c 00:36:12.852 1 8 libtcmalloc_minimal.so 00:36:12.852 ----------------------------------------------------- 00:36:12.852 00:36:12.852 07:25:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:13.109 07:25:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:36:13.109 07:25:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:36:13.109 07:25:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:36:13.109 07:25:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1511 -- # bdfs=() 00:36:13.109 07:25:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1511 -- # local bdfs 00:36:13.109 07:25:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:13.109 07:25:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:13.109 07:25:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:36:13.109 07:25:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:36:13.109 07:25:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:d8:00.0 00:36:13.109 07:25:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 -i 192.168.100.8 00:36:16.383 Nvme0n1 00:36:16.383 07:25:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:36:21.637 07:25:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=432cdc50-aeed-41a7-9788-c567419b0a9d 00:36:21.637 07:25:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 432cdc50-aeed-41a7-9788-c567419b0a9d 00:36:21.637 07:25:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local lvs_uuid=432cdc50-aeed-41a7-9788-c567419b0a9d 00:36:21.637 07:25:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local lvs_info 00:36:21.637 07:25:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local fc 00:36:21.637 07:25:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local cs 00:36:21.637 07:25:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:36:21.893 07:25:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # lvs_info='[ 00:36:21.893 { 00:36:21.893 "uuid": "432cdc50-aeed-41a7-9788-c567419b0a9d", 00:36:21.893 "name": "lvs_0", 00:36:21.893 "base_bdev": "Nvme0n1", 00:36:21.893 "total_data_clusters": 1862, 00:36:21.893 "free_clusters": 1862, 00:36:21.893 "block_size": 512, 00:36:21.893 "cluster_size": 1073741824 00:36:21.893 } 00:36:21.893 ]' 00:36:21.893 07:25:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="432cdc50-aeed-41a7-9788-c567419b0a9d") .free_clusters' 00:36:21.893 07:25:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # fc=1862 00:36:21.893 07:25:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="432cdc50-aeed-41a7-9788-c567419b0a9d") .cluster_size' 00:36:22.149 07:25:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # cs=1073741824 00:36:22.149 07:25:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # free_mb=1906688 00:36:22.149 07:25:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # echo 1906688 00:36:22.149 1906688 00:36:22.149 07:25:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1906688 00:36:22.405 cae03f82-ad5e-4e59-a20f-3a9f0c38be46 00:36:22.662 07:25:37 
nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:36:22.662 07:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:36:22.919 07:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:36:23.175 07:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:36:23.175 07:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:36:23.175 07:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:36:23.175 07:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:23.175 07:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local sanitizers 00:36:23.175 07:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:36:23.176 07:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # shift 00:36:23.176 07:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local asan_lib= 00:36:23.176 07:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:36:23.176 07:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:36:23.176 07:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libasan 00:36:23.176 07:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:36:23.176 07:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:23.176 07:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:23.176 07:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # break 00:36:23.176 07:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:36:23.176 07:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:36:23.432 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:36:23.432 fio-3.35 00:36:23.432 Starting 1 thread 00:36:23.689 EAL: No free 2048 kB hugepages reported on node 1 
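Two details of the run just started are easy to miss in the trace. First, the bdev behind nqn.2016-06.io.spdk:cnode2 is lvs_0/lbd_0, sized by get_lvs_free_mb from the store's free space: 1862 free clusters x 1 GiB cluster size = 1906688 MiB, the value echoed above. Second, every fio pass in this test uses the same launch pattern: the helper resolves the ASan runtime from the plugin's ldd output so it can be LD_PRELOADed ahead of the spdk_nvme external ioengine, then passes the NVMe-oF connection parameters through fio's --filename instead of a device path. Reproduced as a single command purely for readability (paths, address and ASan library exactly as logged; the contents of example_config.fio are not shown in this log):

  LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' \
    /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096

The quoted filename is interpreted by the spdk_nvme ioengine as a transport ID (RDMA, IPv4, target 192.168.100.8, service 4420, namespace 1), which is why the same job file can be pointed at any listener the target exposes.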
00:36:26.210 00:36:26.210 test: (groupid=0, jobs=1): err= 0: pid=1860396: Wed Jul 24 07:25:40 2024 00:36:26.210 read: IOPS=8630, BW=33.7MiB/s (35.3MB/s)(67.6MiB/2005msec) 00:36:26.210 slat (nsec): min=1589, max=122348, avg=1803.95, stdev=1039.52 00:36:26.210 clat (usec): min=196, max=333225, avg=7359.60, stdev=19925.19 00:36:26.211 lat (usec): min=201, max=333231, avg=7361.40, stdev=19925.27 00:36:26.211 clat percentiles (msec): 00:36:26.211 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 7], 00:36:26.211 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:36:26.211 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 7], 95.00th=[ 7], 00:36:26.211 | 99.00th=[ 8], 99.50th=[ 9], 99.90th=[ 334], 99.95th=[ 334], 00:36:26.211 | 99.99th=[ 334] 00:36:26.211 bw ( KiB/s): min=12992, max=42032, per=99.88%, avg=34480.00, stdev=14330.73, samples=4 00:36:26.211 iops : min= 3248, max=10508, avg=8620.00, stdev=3582.68, samples=4 00:36:26.211 write: IOPS=8626, BW=33.7MiB/s (35.3MB/s)(67.6MiB/2005msec); 0 zone resets 00:36:26.211 slat (nsec): min=1627, max=17804, avg=1886.71, stdev=449.13 00:36:26.211 clat (usec): min=178, max=333577, avg=7326.68, stdev=19396.28 00:36:26.211 lat (usec): min=180, max=333581, avg=7328.57, stdev=19396.39 00:36:26.211 clat percentiles (msec): 00:36:26.211 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 7], 00:36:26.211 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:36:26.211 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 7], 95.00th=[ 7], 00:36:26.211 | 99.00th=[ 8], 99.50th=[ 9], 99.90th=[ 334], 99.95th=[ 334], 00:36:26.211 | 99.99th=[ 334] 00:36:26.211 bw ( KiB/s): min=13448, max=41672, per=99.88%, avg=34468.00, stdev=14014.07, samples=4 00:36:26.211 iops : min= 3362, max=10418, avg=8617.00, stdev=3503.52, samples=4 00:36:26.211 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:36:26.211 lat (msec) : 2=0.02%, 4=0.16%, 10=99.34%, 20=0.06%, 500=0.37% 00:36:26.211 cpu : usr=99.45%, sys=0.20%, ctx=16, majf=0, minf=1698 00:36:26.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:36:26.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:26.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:26.211 issued rwts: total=17304,17297,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:26.211 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:26.211 00:36:26.211 Run status group 0 (all jobs): 00:36:26.211 READ: bw=33.7MiB/s (35.3MB/s), 33.7MiB/s-33.7MiB/s (35.3MB/s-35.3MB/s), io=67.6MiB (70.9MB), run=2005-2005msec 00:36:26.211 WRITE: bw=33.7MiB/s (35.3MB/s), 33.7MiB/s-33.7MiB/s (35.3MB/s-35.3MB/s), io=67.6MiB (70.8MB), run=2005-2005msec 00:36:26.211 ----------------------------------------------------- 00:36:26.211 Suppressions used: 00:36:26.211 count bytes template 00:36:26.211 1 64 /usr/src/fio/parse.c 00:36:26.211 1 8 libtcmalloc_minimal.so 00:36:26.211 ----------------------------------------------------- 00:36:26.211 00:36:26.211 07:25:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:26.468 07:25:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:36:27.835 07:25:42 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=bc644a08-8b5e-4b46-a8ce-2d3a101e8faa 00:36:27.835 07:25:42 
nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb bc644a08-8b5e-4b46-a8ce-2d3a101e8faa 00:36:27.835 07:25:42 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local lvs_uuid=bc644a08-8b5e-4b46-a8ce-2d3a101e8faa 00:36:27.835 07:25:42 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local lvs_info 00:36:27.835 07:25:42 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local fc 00:36:27.835 07:25:42 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local cs 00:36:27.835 07:25:42 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:36:27.835 07:25:42 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # lvs_info='[ 00:36:27.835 { 00:36:27.835 "uuid": "432cdc50-aeed-41a7-9788-c567419b0a9d", 00:36:27.835 "name": "lvs_0", 00:36:27.835 "base_bdev": "Nvme0n1", 00:36:27.835 "total_data_clusters": 1862, 00:36:27.835 "free_clusters": 0, 00:36:27.835 "block_size": 512, 00:36:27.835 "cluster_size": 1073741824 00:36:27.835 }, 00:36:27.835 { 00:36:27.835 "uuid": "bc644a08-8b5e-4b46-a8ce-2d3a101e8faa", 00:36:27.835 "name": "lvs_n_0", 00:36:27.835 "base_bdev": "cae03f82-ad5e-4e59-a20f-3a9f0c38be46", 00:36:27.835 "total_data_clusters": 476206, 00:36:27.835 "free_clusters": 476206, 00:36:27.835 "block_size": 512, 00:36:27.835 "cluster_size": 4194304 00:36:27.835 } 00:36:27.835 ]' 00:36:27.835 07:25:42 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="bc644a08-8b5e-4b46-a8ce-2d3a101e8faa") .free_clusters' 00:36:27.835 07:25:42 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # fc=476206 00:36:27.835 07:25:42 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="bc644a08-8b5e-4b46-a8ce-2d3a101e8faa") .cluster_size' 00:36:27.835 07:25:42 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # cs=4194304 00:36:27.835 07:25:42 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # free_mb=1904824 00:36:27.835 07:25:42 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # echo 1904824 00:36:27.835 1904824 00:36:27.835 07:25:42 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1904824 00:36:30.356 bc3ab8e5-742e-4f1b-9e49-27ea1335574a 00:36:30.356 07:25:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:36:30.613 07:25:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:36:30.870 07:25:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:36:30.870 07:25:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:36:30.870 07:25:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1358 -- 
# fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:36:30.870 07:25:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:36:30.870 07:25:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:30.870 07:25:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local sanitizers 00:36:30.870 07:25:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:36:30.870 07:25:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # shift 00:36:30.870 07:25:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local asan_lib= 00:36:30.870 07:25:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:36:30.870 07:25:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:36:30.870 07:25:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libasan 00:36:30.870 07:25:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:36:30.870 07:25:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:30.870 07:25:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:30.870 07:25:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # break 00:36:30.870 07:25:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:36:30.870 07:25:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:36:31.437 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:36:31.437 fio-3.35 00:36:31.437 Starting 1 thread 00:36:31.437 EAL: No free 2048 kB hugepages reported on node 1 00:36:34.024 00:36:34.024 test: (groupid=0, jobs=1): err= 0: pid=1861817: Wed Jul 24 07:25:48 2024 00:36:34.024 read: IOPS=8694, BW=34.0MiB/s (35.6MB/s)(68.2MiB/2007msec) 00:36:34.024 slat (nsec): min=1561, max=25032, avg=1772.30, stdev=403.07 00:36:34.024 clat (usec): min=3332, max=12169, avg=7273.27, stdev=212.35 00:36:34.024 lat (usec): min=3336, max=12171, avg=7275.04, stdev=212.31 00:36:34.024 clat percentiles (usec): 00:36:34.024 | 1.00th=[ 7111], 5.00th=[ 7177], 10.00th=[ 7177], 20.00th=[ 7242], 00:36:34.024 | 30.00th=[ 7242], 40.00th=[ 7242], 50.00th=[ 7242], 60.00th=[ 7308], 00:36:34.024 | 70.00th=[ 7308], 80.00th=[ 7308], 90.00th=[ 7308], 95.00th=[ 7373], 00:36:34.024 | 99.00th=[ 7635], 99.50th=[ 7701], 99.90th=[10290], 99.95th=[11731], 00:36:34.024 | 99.99th=[12125] 00:36:34.024 bw ( KiB/s): min=32976, max=35592, per=99.99%, avg=34774.00, stdev=1211.14, samples=4 00:36:34.024 iops : min= 8244, max= 8898, avg=8693.50, stdev=302.78, samples=4 
00:36:34.024 write: IOPS=8687, BW=33.9MiB/s (35.6MB/s)(68.1MiB/2007msec); 0 zone resets 00:36:34.024 slat (nsec): min=1609, max=23328, avg=1902.09, stdev=424.60 00:36:34.024 clat (usec): min=3324, max=12162, avg=7300.08, stdev=223.02 00:36:34.024 lat (usec): min=3331, max=12164, avg=7301.98, stdev=222.99 00:36:34.024 clat percentiles (usec): 00:36:34.024 | 1.00th=[ 7177], 5.00th=[ 7242], 10.00th=[ 7242], 20.00th=[ 7242], 00:36:34.024 | 30.00th=[ 7242], 40.00th=[ 7308], 50.00th=[ 7308], 60.00th=[ 7308], 00:36:34.024 | 70.00th=[ 7308], 80.00th=[ 7308], 90.00th=[ 7373], 95.00th=[ 7373], 00:36:34.024 | 99.00th=[ 7635], 99.50th=[ 7701], 99.90th=[11731], 99.95th=[12125], 00:36:34.024 | 99.99th=[12125] 00:36:34.025 bw ( KiB/s): min=33888, max=35232, per=99.98%, avg=34742.00, stdev=596.21, samples=4 00:36:34.025 iops : min= 8472, max= 8808, avg=8685.50, stdev=149.05, samples=4 00:36:34.025 lat (msec) : 4=0.02%, 10=99.82%, 20=0.16% 00:36:34.025 cpu : usr=99.45%, sys=0.15%, ctx=15, majf=0, minf=1669 00:36:34.025 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:36:34.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:34.025 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:34.025 issued rwts: total=17449,17436,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:34.025 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:34.025 00:36:34.025 Run status group 0 (all jobs): 00:36:34.025 READ: bw=34.0MiB/s (35.6MB/s), 34.0MiB/s-34.0MiB/s (35.6MB/s-35.6MB/s), io=68.2MiB (71.5MB), run=2007-2007msec 00:36:34.025 WRITE: bw=33.9MiB/s (35.6MB/s), 33.9MiB/s-33.9MiB/s (35.6MB/s-35.6MB/s), io=68.1MiB (71.4MB), run=2007-2007msec 00:36:34.025 ----------------------------------------------------- 00:36:34.025 Suppressions used: 00:36:34.025 count bytes template 00:36:34.025 1 64 /usr/src/fio/parse.c 00:36:34.025 1 8 libtcmalloc_minimal.so 00:36:34.025 ----------------------------------------------------- 00:36:34.025 00:36:34.025 07:25:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:36:34.283 07:25:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:36:34.283 07:25:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:36:44.244 07:25:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:36:44.244 07:25:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:36:49.499 07:26:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:36:49.499 07:26:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:36:52.775 07:26:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:52.775 07:26:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:36:52.775 07:26:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:36:52.775 07:26:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:36:52.775 07:26:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:36:52.775 07:26:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:36:52.775 07:26:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:36:52.775 07:26:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:36:52.775 07:26:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:52.775 07:26:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:36:52.775 rmmod nvme_rdma 00:36:52.775 rmmod nvme_fabrics 00:36:52.775 07:26:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:52.775 07:26:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:36:52.775 07:26:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:36:52.775 07:26:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1856792 ']' 00:36:52.775 07:26:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1856792 00:36:52.775 07:26:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1856792 ']' 00:36:52.775 07:26:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1856792 00:36:52.775 07:26:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:36:52.775 07:26:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:52.775 07:26:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1856792 00:36:52.775 07:26:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:52.775 07:26:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:52.775 07:26:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1856792' 00:36:52.775 killing process with pid 1856792 00:36:52.775 07:26:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1856792 00:36:52.775 07:26:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1856792 00:36:54.670 07:26:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:54.670 07:26:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:36:54.670 00:36:54.670 real 0m57.984s 00:36:54.670 user 4m1.024s 00:36:54.670 sys 0m12.204s 00:36:54.670 07:26:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:54.670 07:26:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.670 ************************************ 00:36:54.670 END TEST nvmf_fio_host 00:36:54.670 ************************************ 00:36:54.670 07:26:08 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:36:54.670 07:26:08 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:54.670 07:26:08 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:54.670 07:26:08 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.670 ************************************ 00:36:54.670 START TEST 
nvmf_failover 00:36:54.670 ************************************ 00:36:54.670 07:26:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:36:54.670 * Looking for test storage... 00:36:54.670 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:54.670 07:26:09 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:36:54.670 07:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:37:02.778 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:37:02.778 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- 
nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:37:02.778 Found net devices under 0000:d9:00.0: mlx_0_0 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:37:02.778 Found net devices under 0000:d9:00.1: mlx_0_1 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:37:02.778 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # rdma_device_init 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # uname 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # modprobe ib_cm 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@63 -- # modprobe ib_core 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@64 -- # modprobe ib_umad 00:37:02.779 07:26:17 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe iw_cm 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # allocate_nic_ips 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@73 -- # get_rdma_if_list 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:37:02.779 6: mlx_0_0: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:37:02.779 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:37:02.779 altname enp217s0f0np0 00:37:02.779 altname ens818f0np0 00:37:02.779 inet 192.168.100.8/24 scope global mlx_0_0 00:37:02.779 valid_lft forever preferred_lft forever 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:37:02.779 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:37:02.779 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:37:02.779 altname enp217s0f1np1 00:37:02.779 altname ens818f1np1 00:37:02.779 inet 192.168.100.9/24 scope global mlx_0_1 00:37:02.779 valid_lft forever preferred_lft forever 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@86 -- # get_rdma_if_list 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:02.779 07:26:17 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:37:02.779 192.168.100.9' 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:37:02.779 192.168.100.9' 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # head -n 1 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:37:02.779 192.168.100.9' 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # tail -n +2 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # head -n 1 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1869465 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1869465 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1869465 ']' 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:02.779 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:02.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:02.780 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:02.780 07:26:17 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:37:03.063 [2024-07-24 07:26:17.425903] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:37:03.063 [2024-07-24 07:26:17.426009] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:03.063 EAL: No free 2048 kB hugepages reported on node 1 00:37:03.063 [2024-07-24 07:26:17.574491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:03.334 [2024-07-24 07:26:17.783240] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:03.334 [2024-07-24 07:26:17.783284] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:03.334 [2024-07-24 07:26:17.783301] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:03.334 [2024-07-24 07:26:17.783312] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:03.334 [2024-07-24 07:26:17.783323] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
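
The trace up to this point boils down to three preparatory steps: load the RDMA kernel modules, derive the test IPs from the mlx_0_* interfaces, and start nvmf_tgt on core mask 0xE, then wait for its RPC socket. Below is a minimal stand-alone sketch of those steps; the readiness loop that polls rpc.py rpc_get_methods is an assumption (the harness uses its own waitforlisten helper), and SPDK_DIR is a placeholder for the checkout path shown in the log.

    # Hedged sketch of the setup the xtrace above performs (poll loop and SPDK_DIR are assumptions).
    SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # placeholder for the checkout path
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma; do
        sudo modprobe "$m"                                    # same modules the log loads one by one
    done
    # First IPv4 address of the RDMA netdev, exactly as the get_ip_address pipeline in the log:
    target_ip=$(ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1)   # -> 192.168.100.8
    # Start the target with the same shm id, trace mask and core mask, then wait for its RPC socket:
    "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    until "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5   # assumed readiness poll; waitforlisten in autotest_common.sh does the equivalent
    done
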
00:37:03.334 [2024-07-24 07:26:17.783471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:03.334 [2024-07-24 07:26:17.783542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:03.334 [2024-07-24 07:26:17.783556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:37:03.592 07:26:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:03.592 07:26:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:37:03.592 07:26:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:03.592 07:26:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:03.592 07:26:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:37:03.849 07:26:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:03.849 07:26:18 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:37:03.849 [2024-07-24 07:26:18.451196] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7fd247911940) succeed. 00:37:03.849 [2024-07-24 07:26:18.460675] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7fd2478cd940) succeed. 00:37:04.107 07:26:18 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:37:04.364 Malloc0 00:37:04.364 07:26:18 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:04.621 07:26:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:04.879 07:26:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:37:04.879 [2024-07-24 07:26:19.484201] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:37:05.136 07:26:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:37:05.136 [2024-07-24 07:26:19.652566] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:37:05.136 07:26:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:37:05.393 [2024-07-24 07:26:19.825179] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:37:05.393 07:26:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:37:05.393 07:26:19 nvmf_rdma.nvmf_host.nvmf_failover -- 
host/failover.sh@31 -- # bdevperf_pid=1869783 00:37:05.393 07:26:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:05.393 07:26:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1869783 /var/tmp/bdevperf.sock 00:37:05.393 07:26:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1869783 ']' 00:37:05.393 07:26:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:05.393 07:26:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:05.393 07:26:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:05.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:05.393 07:26:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:05.393 07:26:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:37:06.324 07:26:20 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:06.324 07:26:20 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:37:06.324 07:26:20 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:06.581 NVMe0n1 00:37:06.581 07:26:20 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:06.581 00:37:06.837 07:26:21 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:06.837 07:26:21 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1870050 00:37:06.837 07:26:21 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:37:07.769 07:26:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:37:08.026 07:26:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:37:11.297 07:26:25 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:11.297 00:37:11.297 07:26:25 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:37:11.297 07:26:25 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:37:14.575 07:26:28 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:37:14.575 [2024-07-24 07:26:29.027067] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:37:14.575 07:26:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:37:15.507 07:26:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:37:15.764 07:26:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1870050 00:37:22.314 0 00:37:22.314 07:26:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1869783 00:37:22.314 07:26:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1869783 ']' 00:37:22.314 07:26:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1869783 00:37:22.314 07:26:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:37:22.314 07:26:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:22.314 07:26:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1869783 00:37:22.314 07:26:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:22.314 07:26:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:22.314 07:26:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1869783' 00:37:22.314 killing process with pid 1869783 00:37:22.314 07:26:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1869783 00:37:22.314 07:26:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1869783 00:37:23.274 07:26:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:37:23.274 [2024-07-24 07:26:19.930340] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:37:23.274 [2024-07-24 07:26:19.930455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1869783 ] 00:37:23.274 EAL: No free 2048 kB hugepages reported on node 1 00:37:23.274 [2024-07-24 07:26:20.080632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:23.274 [2024-07-24 07:26:20.296836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:23.274 Running I/O for 15 seconds... 
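
Condensed, the failover exercise that produced the dump below is a listener shuffle against a single subsystem while bdevperf keeps a 128-deep verify workload running for 15 seconds. The commands are lifted from the rpc.py calls visible in the trace above (failover.sh lines 22-57); only the long workspace path is shortened to the placeholder $SPDK_DIR.

    # Target side: one Malloc namespace exported on three RDMA listeners (failover.sh@22-28).
    rpc="$SPDK_DIR/scripts/rpc.py"                      # $SPDK_DIR is a placeholder for the checkout
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s $port
    done
    # Initiator side: bdevperf attaches on 4420/4421, then listeners are removed and re-added
    # underneath it (failover.sh@35-57) while the verify workload keeps running.
    brpc="$rpc -s /var/tmp/bdevperf.sock"
    $brpc bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $brpc bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420; sleep 3
    $brpc bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421; sleep 3
    $rpc nvmf_subsystem_add_listener    nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420; sleep 1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
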
00:37:23.274 [2024-07-24 07:26:23.392371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.274 [2024-07-24 07:26:23.392433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.274 [2024-07-24 07:26:23.392470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.274 [2024-07-24 07:26:23.392485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.274 [2024-07-24 07:26:23.392503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.274 [2024-07-24 07:26:23.392516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.274 [2024-07-24 07:26:23.392534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.274 [2024-07-24 07:26:23.392547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.392564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.275 [2024-07-24 07:26:23.392578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.392597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.275 [2024-07-24 07:26:23.392613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.392636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.275 [2024-07-24 07:26:23.392649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.392666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.275 [2024-07-24 07:26:23.392680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.392698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.275 [2024-07-24 07:26:23.392712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.392729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.275 [2024-07-24 07:26:23.392742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.392759] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.275 [2024-07-24 07:26:23.392772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.392794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.275 [2024-07-24 07:26:23.392809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.392827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.275 [2024-07-24 07:26:23.392840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.392861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.275 [2024-07-24 07:26:23.392875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.392893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.275 [2024-07-24 07:26:23.392908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.392925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.275 [2024-07-24 07:26:23.392938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.392955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.275 [2024-07-24 07:26:23.392968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.392985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.275 [2024-07-24 07:26:23.392998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.393014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.275 [2024-07-24 07:26:23.393026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.393043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.275 [2024-07-24 07:26:23.393055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.393071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 
nsid:1 lba:7096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.275 [2024-07-24 07:26:23.393084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.393105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.275 [2024-07-24 07:26:23.393117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.393133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.275 [2024-07-24 07:26:23.393146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.393162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.275 [2024-07-24 07:26:23.393177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.393193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.275 [2024-07-24 07:26:23.393206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.393222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.275 [2024-07-24 07:26:23.393235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.393250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.275 [2024-07-24 07:26:23.393263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.393280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.275 [2024-07-24 07:26:23.393295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.393311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.275 [2024-07-24 07:26:23.393323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.393343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000074ff000 len:0x1000 key:0x183e00 00:37:23.275 [2024-07-24 07:26:23.393356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.393374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6152 len:8 SGL KEYED 
DATA BLOCK ADDRESS 0x200007501000 len:0x1000 key:0x183e00 00:37:23.275 [2024-07-24 07:26:23.393386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.393404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007503000 len:0x1000 key:0x183e00 00:37:23.275 [2024-07-24 07:26:23.393417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.393433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007505000 len:0x1000 key:0x183e00 00:37:23.275 [2024-07-24 07:26:23.393446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.393465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007507000 len:0x1000 key:0x183e00 00:37:23.275 [2024-07-24 07:26:23.393477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.393493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007509000 len:0x1000 key:0x183e00 00:37:23.275 [2024-07-24 07:26:23.393506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.393522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750b000 len:0x1000 key:0x183e00 00:37:23.275 [2024-07-24 07:26:23.393536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.393553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750d000 len:0x1000 key:0x183e00 00:37:23.275 [2024-07-24 07:26:23.393566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.393585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750f000 len:0x1000 key:0x183e00 00:37:23.275 [2024-07-24 07:26:23.393598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.393616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007511000 len:0x1000 key:0x183e00 00:37:23.275 [2024-07-24 07:26:23.393633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.275 [2024-07-24 07:26:23.393650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007513000 len:0x1000 key:0x183e00 00:37:23.275 [2024-07-24 07:26:23.393663] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.393680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007515000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.393692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.393709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007517000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.393721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.393738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007519000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.393751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.393767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751b000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.393780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.393797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751d000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.393815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.393841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751f000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.393854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.393870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007521000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.393884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.393901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007523000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.393915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.393932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007525000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.393946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 
07:26:23.393962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007527000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.393975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.393999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007529000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.394012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.394028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752b000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.394041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.394058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752d000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.394071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.394090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752f000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.394103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.394120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007531000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.394133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.394149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007533000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.394162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.394178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007535000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.394191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.394209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007537000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.394221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.394238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6376 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x200007539000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.394251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.394269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753b000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.394281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.394298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753d000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.394310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.394330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753f000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.394342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.394359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007541000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.394372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.394388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007543000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.394401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.394418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007545000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.394430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.394447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007547000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.394460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.394476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007549000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.394489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.394506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754b000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.394518] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.394534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754d000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.394547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.394568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754f000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.394581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.394598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007551000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.394612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.394634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007553000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.394647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.394665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007555000 len:0x1000 key:0x183e00 00:37:23.276 [2024-07-24 07:26:23.394678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.276 [2024-07-24 07:26:23.394694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007557000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.394707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.394724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007559000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.394737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.394754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755b000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.394766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.394783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755d000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.394795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.394814] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755f000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.394827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.394844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007561000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.394857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.394874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007563000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.394886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.394903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007565000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.394916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.394932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007567000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.394945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.394965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007569000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.394978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.394994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756b000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.395007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.395024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756d000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.395037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.395056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756f000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.395069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.395086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007571000 len:0x1000 
key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.395099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.395115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007573000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.395128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.395144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007575000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.395157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.395173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007577000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.395186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.395203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007579000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.395215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.395231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757b000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.395244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.395260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757d000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.395273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.395291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757f000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.395307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.395324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007581000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.395336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.395352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007583000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.395365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.395381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007585000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.395395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.395411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007587000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.395424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.395441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007589000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.395454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.395470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758b000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.395484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.395500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758d000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.395513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.395531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758f000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.395544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.395561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007591000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.395574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.395590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007593000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.395603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.395619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007595000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.395636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.395654] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007597000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.395667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.395686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007599000 len:0x1000 key:0x183e00 00:37:23.277 [2024-07-24 07:26:23.395699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.277 [2024-07-24 07:26:23.395715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759b000 len:0x1000 key:0x183e00 00:37:23.278 [2024-07-24 07:26:23.395728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:23.395744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759d000 len:0x1000 key:0x183e00 00:37:23.278 [2024-07-24 07:26:23.395757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:23.395776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759f000 len:0x1000 key:0x183e00 00:37:23.278 [2024-07-24 07:26:23.395789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:23.395806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a1000 len:0x1000 key:0x183e00 00:37:23.278 [2024-07-24 07:26:23.395819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:23.395835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a3000 len:0x1000 key:0x183e00 00:37:23.278 [2024-07-24 07:26:23.395848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:23.395865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a5000 len:0x1000 key:0x183e00 00:37:23.278 [2024-07-24 07:26:23.395877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:23.395895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a7000 len:0x1000 key:0x183e00 00:37:23.278 [2024-07-24 07:26:23.395908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:23.395930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a9000 len:0x1000 key:0x183e00 00:37:23.278 
[2024-07-24 07:26:23.395942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:23.395959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ab000 len:0x1000 key:0x183e00 00:37:23.278 [2024-07-24 07:26:23.395971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:23.395988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ad000 len:0x1000 key:0x183e00 00:37:23.278 [2024-07-24 07:26:23.396000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:23.396022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075af000 len:0x1000 key:0x183e00 00:37:23.278 [2024-07-24 07:26:23.396035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:23.396052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b1000 len:0x1000 key:0x183e00 00:37:23.278 [2024-07-24 07:26:23.396064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:23.396081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b3000 len:0x1000 key:0x183e00 00:37:23.278 [2024-07-24 07:26:23.396094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:23.396111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b5000 len:0x1000 key:0x183e00 00:37:23.278 [2024-07-24 07:26:23.396123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:23.396140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b7000 len:0x1000 key:0x183e00 00:37:23.278 [2024-07-24 07:26:23.396153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:23.396171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b9000 len:0x1000 key:0x183e00 00:37:23.278 [2024-07-24 07:26:23.396183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:23.396199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bb000 len:0x1000 key:0x183e00 00:37:23.278 [2024-07-24 07:26:23.396212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:23.396229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bd000 len:0x1000 key:0x183e00 00:37:23.278 [2024-07-24 07:26:23.396241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:23.396261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bf000 len:0x1000 key:0x183e00 00:37:23.278 [2024-07-24 07:26:23.396281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:23.396298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c1000 len:0x1000 key:0x183e00 00:37:23.278 [2024-07-24 07:26:23.396311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:23.398316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:23.278 [2024-07-24 07:26:23.398340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:23.278 [2024-07-24 07:26:23.398356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6928 len:8 PRP1 0x0 PRP2 0x0 00:37:23.278 [2024-07-24 07:26:23.398370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:23.398559] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20000b1ff140 was disconnected and freed. reset controller. 00:37:23.278 [2024-07-24 07:26:23.398583] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:37:23.278 [2024-07-24 07:26:23.398607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:23.278 [2024-07-24 07:26:23.401660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:23.278 [2024-07-24 07:26:23.429673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:37:23.278 [2024-07-24 07:26:23.477799] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:37:23.278 [2024-07-24 07:26:26.853169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:54504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.278 [2024-07-24 07:26:26.853231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:26.853275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:54512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.278 [2024-07-24 07:26:26.853290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:26.853309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:54136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007553000 len:0x1000 key:0x183e00 00:37:23.278 [2024-07-24 07:26:26.853322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:26.853341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:54144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a1000 len:0x1000 key:0x183e00 00:37:23.278 [2024-07-24 07:26:26.853355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:26.853373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:54152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759f000 len:0x1000 key:0x183e00 00:37:23.278 [2024-07-24 07:26:26.853386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:26.853405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:54160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759d000 len:0x1000 key:0x183e00 00:37:23.278 [2024-07-24 07:26:26.853419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:26.853436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:54168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759b000 len:0x1000 key:0x183e00 00:37:23.278 [2024-07-24 07:26:26.853450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:26.853468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:54176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007599000 len:0x1000 key:0x183e00 00:37:23.278 [2024-07-24 07:26:26.853482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:26.853501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:54184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007597000 len:0x1000 key:0x183e00 00:37:23.278 [2024-07-24 07:26:26.853514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:26.853534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:54192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007595000 
len:0x1000 key:0x183e00 00:37:23.278 [2024-07-24 07:26:26.853550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.278 [2024-07-24 07:26:26.853568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:54520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.279 [2024-07-24 07:26:26.853582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.279 [2024-07-24 07:26:26.853599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:54528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.279 [2024-07-24 07:26:26.853612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.279 [2024-07-24 07:26:26.853636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:54536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.279 [2024-07-24 07:26:26.853649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.279 [2024-07-24 07:26:26.853666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.279 [2024-07-24 07:26:26.853680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.279 [2024-07-24 07:26:26.853697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:54552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.279 [2024-07-24 07:26:26.853711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.279 [2024-07-24 07:26:26.853727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:54560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.279 [2024-07-24 07:26:26.853741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.279 [2024-07-24 07:26:26.853757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:54568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.279 [2024-07-24 07:26:26.853770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.279 [2024-07-24 07:26:26.853790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:54576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.279 [2024-07-24 07:26:26.853804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.279 [2024-07-24 07:26:26.853821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:54584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.279 [2024-07-24 07:26:26.853833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.279 [2024-07-24 07:26:26.853850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:54592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.279 [2024-07-24 
07:26:26.853862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.279 [2024-07-24 07:26:26.853879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:54600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.279 [2024-07-24 07:26:26.853891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.279 [2024-07-24 07:26:26.853909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:54608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.279 [2024-07-24 07:26:26.853921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.279 [2024-07-24 07:26:26.853939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:54616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.279 [2024-07-24 07:26:26.853951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.279 [2024-07-24 07:26:26.853968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:54624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.279 [2024-07-24 07:26:26.853980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.279 [2024-07-24 07:26:26.853996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:54632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.279 [2024-07-24 07:26:26.854009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.279 [2024-07-24 07:26:26.854028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:54640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.279 [2024-07-24 07:26:26.854041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.279 [2024-07-24 07:26:26.854056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:54648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.279 [2024-07-24 07:26:26.854069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.279 [2024-07-24 07:26:26.854085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:54656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.279 [2024-07-24 07:26:26.854098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.279 [2024-07-24 07:26:26.854115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:54664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.279 [2024-07-24 07:26:26.854127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.279 [2024-07-24 07:26:26.854144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:54672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.279 [2024-07-24 07:26:26.854157] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.279 [2024-07-24 07:26:26.854174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:54680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.279 [2024-07-24 07:26:26.854186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.279 [2024-07-24 07:26:26.854203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:54688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.279 [2024-07-24 07:26:26.854215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.279 [2024-07-24 07:26:26.854232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:54696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.279 [2024-07-24 07:26:26.854244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.279 [2024-07-24 07:26:26.854263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:54704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.279 [2024-07-24 07:26:26.854275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.279 [2024-07-24 07:26:26.854292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.279 [2024-07-24 07:26:26.854306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.279 [2024-07-24 07:26:26.854322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:54200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007575000 len:0x1000 key:0x183e00 00:37:23.279 [2024-07-24 07:26:26.854336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.279 [2024-07-24 07:26:26.854352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:54208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007573000 len:0x1000 key:0x183e00 00:37:23.279 [2024-07-24 07:26:26.854365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.279 [2024-07-24 07:26:26.854381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:54216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c1000 len:0x1000 key:0x183e00 00:37:23.279 [2024-07-24 07:26:26.854393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.279 [2024-07-24 07:26:26.854410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:54224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bf000 len:0x1000 key:0x183e00 00:37:23.279 [2024-07-24 07:26:26.854423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.279 [2024-07-24 07:26:26.854439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:54232 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x2000075bd000 len:0x1000 key:0x183e00 00:37:23.280 [2024-07-24 07:26:26.854459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.854476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:54240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bb000 len:0x1000 key:0x183e00 00:37:23.280 [2024-07-24 07:26:26.854489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.854509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:54248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b9000 len:0x1000 key:0x183e00 00:37:23.280 [2024-07-24 07:26:26.854522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.854538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:54256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b7000 len:0x1000 key:0x183e00 00:37:23.280 [2024-07-24 07:26:26.854551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.854569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:54720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.280 [2024-07-24 07:26:26.854581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.854599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.280 [2024-07-24 07:26:26.854611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.854642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:54736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.280 [2024-07-24 07:26:26.854655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.854672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:54744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.280 [2024-07-24 07:26:26.854686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.854702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:54752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.280 [2024-07-24 07:26:26.854715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.854732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:54760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.280 [2024-07-24 07:26:26.854744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.854764] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:54768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.280 [2024-07-24 07:26:26.854777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.854793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:54264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007585000 len:0x1000 key:0x183e00 00:37:23.280 [2024-07-24 07:26:26.854806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.854823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:54272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007577000 len:0x1000 key:0x183e00 00:37:23.280 [2024-07-24 07:26:26.854836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.854853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:54280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007579000 len:0x1000 key:0x183e00 00:37:23.280 [2024-07-24 07:26:26.854865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.854882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:54288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757b000 len:0x1000 key:0x183e00 00:37:23.280 [2024-07-24 07:26:26.854895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.854912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:54296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757d000 len:0x1000 key:0x183e00 00:37:23.280 [2024-07-24 07:26:26.854925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.854942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:54304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757f000 len:0x1000 key:0x183e00 00:37:23.280 [2024-07-24 07:26:26.854955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.854972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:54312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007581000 len:0x1000 key:0x183e00 00:37:23.280 [2024-07-24 07:26:26.854984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.855003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:54776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.280 [2024-07-24 07:26:26.855016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.855033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:54784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.280 [2024-07-24 07:26:26.855046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.855064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:54792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.280 [2024-07-24 07:26:26.855077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.855094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.280 [2024-07-24 07:26:26.855106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.855123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:54808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.280 [2024-07-24 07:26:26.855135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.855151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:54816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.280 [2024-07-24 07:26:26.855164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.855181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:54824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.280 [2024-07-24 07:26:26.855193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.855210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:54832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.280 [2024-07-24 07:26:26.855222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.855241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:54840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.280 [2024-07-24 07:26:26.855254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.855272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:54848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.280 [2024-07-24 07:26:26.855285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.855301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:54856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.280 [2024-07-24 07:26:26.855314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.855331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:54864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.280 [2024-07-24 07:26:26.855343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.855359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:54872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.280 [2024-07-24 07:26:26.855372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.855389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:54880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.280 [2024-07-24 07:26:26.855401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.855419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:54888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.280 [2024-07-24 07:26:26.855432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.855448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:54896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.280 [2024-07-24 07:26:26.855460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.855479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:54904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.280 [2024-07-24 07:26:26.855491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.855508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:54912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.280 [2024-07-24 07:26:26.855520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.280 [2024-07-24 07:26:26.855536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:54920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.281 [2024-07-24 07:26:26.855549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.855566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.281 [2024-07-24 07:26:26.855578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.855594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:54936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.281 [2024-07-24 07:26:26.855607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.855624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:54944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.281 [2024-07-24 07:26:26.855641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.855659] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:54952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.281 [2024-07-24 07:26:26.855672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.855688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:54960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.281 [2024-07-24 07:26:26.855701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.855719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:54320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007523000 len:0x1000 key:0x183e00 00:37:23.281 [2024-07-24 07:26:26.855733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.855750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:54328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007535000 len:0x1000 key:0x183e00 00:37:23.281 [2024-07-24 07:26:26.855763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.855779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:54336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007537000 len:0x1000 key:0x183e00 00:37:23.281 [2024-07-24 07:26:26.855794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.855811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:54344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007539000 len:0x1000 key:0x183e00 00:37:23.281 [2024-07-24 07:26:26.855824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.855840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:54352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753b000 len:0x1000 key:0x183e00 00:37:23.281 [2024-07-24 07:26:26.855853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.855869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:54360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753d000 len:0x1000 key:0x183e00 00:37:23.281 [2024-07-24 07:26:26.855882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.855898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:54368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753f000 len:0x1000 key:0x183e00 00:37:23.281 [2024-07-24 07:26:26.855912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.855930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:54376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007541000 len:0x1000 key:0x183e00 00:37:23.281 [2024-07-24 
07:26:26.855942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.855961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:54384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007543000 len:0x1000 key:0x183e00 00:37:23.281 [2024-07-24 07:26:26.855974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.855991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:54392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007545000 len:0x1000 key:0x183e00 00:37:23.281 [2024-07-24 07:26:26.856004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.856021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:54400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007567000 len:0x1000 key:0x183e00 00:37:23.281 [2024-07-24 07:26:26.856034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.856051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:54408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007569000 len:0x1000 key:0x183e00 00:37:23.281 [2024-07-24 07:26:26.856063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.856080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:54416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756b000 len:0x1000 key:0x183e00 00:37:23.281 [2024-07-24 07:26:26.856092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.856109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:54424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756d000 len:0x1000 key:0x183e00 00:37:23.281 [2024-07-24 07:26:26.856122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.856139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:54432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756f000 len:0x1000 key:0x183e00 00:37:23.281 [2024-07-24 07:26:26.856152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.856169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:54440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007571000 len:0x1000 key:0x183e00 00:37:23.281 [2024-07-24 07:26:26.856181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.856200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:54448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007593000 len:0x1000 key:0x183e00 00:37:23.281 [2024-07-24 07:26:26.856213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.856230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:54968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.281 [2024-07-24 07:26:26.856242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.856258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:54976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.281 [2024-07-24 07:26:26.856271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.856288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:54984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.281 [2024-07-24 07:26:26.856300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.856318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:54992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.281 [2024-07-24 07:26:26.856330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.856347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:55000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.281 [2024-07-24 07:26:26.856359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.856376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:55008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.281 [2024-07-24 07:26:26.856410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.856426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:55016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.281 [2024-07-24 07:26:26.856439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.856458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:55024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.281 [2024-07-24 07:26:26.856471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.856488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:55032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.281 [2024-07-24 07:26:26.856501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.856518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:55040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.281 [2024-07-24 07:26:26.856532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 
07:26:26.856548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:55048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.281 [2024-07-24 07:26:26.856560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.856577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:55056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.281 [2024-07-24 07:26:26.856590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.281 [2024-07-24 07:26:26.856606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:55064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.281 [2024-07-24 07:26:26.856619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:26.856639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:55072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.282 [2024-07-24 07:26:26.856652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:26.856668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:55080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.282 [2024-07-24 07:26:26.856681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:26.856701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:55088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.282 [2024-07-24 07:26:26.856714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:26.856729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:55096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.282 [2024-07-24 07:26:26.856742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:26.856757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:55104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.282 [2024-07-24 07:26:26.856770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:26.856784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:55112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.282 [2024-07-24 07:26:26.856797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:26.856816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:55120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.282 [2024-07-24 07:26:26.856829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:26.856842] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:55128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.282 [2024-07-24 07:26:26.856856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:26.856870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:55136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.282 [2024-07-24 07:26:26.856883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:26.856899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.282 [2024-07-24 07:26:26.856913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:26.856928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:55152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.282 [2024-07-24 07:26:26.856940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:26.856954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:54456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007515000 len:0x1000 key:0x183e00 00:37:23.282 [2024-07-24 07:26:26.856968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:26.856982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:54464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007517000 len:0x1000 key:0x183e00 00:37:23.282 [2024-07-24 07:26:26.856997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:26.857011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:54472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007519000 len:0x1000 key:0x183e00 00:37:23.282 [2024-07-24 07:26:26.857024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:26.857038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:54480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751b000 len:0x1000 key:0x183e00 00:37:23.282 [2024-07-24 07:26:26.857051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:26.857066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:54488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751d000 len:0x1000 key:0x183e00 00:37:23.282 [2024-07-24 07:26:26.857079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:26.859238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:23.282 [2024-07-24 07:26:26.859262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:23.282 [2024-07-24 07:26:26.859275] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54496 len:8 PRP1 0x0 PRP2 0x0 00:37:23.282 [2024-07-24 07:26:26.859289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:26.859457] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000137ff100 was disconnected and freed. reset controller. 00:37:23.282 [2024-07-24 07:26:26.859476] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:37:23.282 [2024-07-24 07:26:26.859491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:23.282 [2024-07-24 07:26:26.862501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:23.282 [2024-07-24 07:26:26.890681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:37:23.282 [2024-07-24 07:26:26.939445] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:37:23.282 [2024-07-24 07:26:31.230459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:90712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753b000 len:0x1000 key:0x183e00 00:37:23.282 [2024-07-24 07:26:31.230519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:31.230557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:90720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753d000 len:0x1000 key:0x183e00 00:37:23.282 [2024-07-24 07:26:31.230571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:31.230587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:90728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753f000 len:0x1000 key:0x183e00 00:37:23.282 [2024-07-24 07:26:31.230600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:31.230615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:91184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.282 [2024-07-24 07:26:31.230634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:31.230650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.282 [2024-07-24 07:26:31.230662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:31.230677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:91200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.282 [2024-07-24 07:26:31.230690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:31.230706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91208 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:37:23.282 [2024-07-24 07:26:31.230718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:31.230733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:91216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.282 [2024-07-24 07:26:31.230746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:31.230761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:91224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.282 [2024-07-24 07:26:31.230774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:31.230788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.282 [2024-07-24 07:26:31.230802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:31.230818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:91240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.282 [2024-07-24 07:26:31.230831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:31.230846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:90736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c3000 len:0x1000 key:0x183e00 00:37:23.282 [2024-07-24 07:26:31.230858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:31.230872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:90744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007591000 len:0x1000 key:0x183e00 00:37:23.282 [2024-07-24 07:26:31.230886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:31.230903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:90752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758f000 len:0x1000 key:0x183e00 00:37:23.282 [2024-07-24 07:26:31.230917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.282 [2024-07-24 07:26:31.230932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:90760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758d000 len:0x1000 key:0x183e00 00:37:23.282 [2024-07-24 07:26:31.230947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.230962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:90768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758b000 len:0x1000 key:0x183e00 00:37:23.283 [2024-07-24 07:26:31.230975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.230990] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:90776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007589000 len:0x1000 key:0x183e00 00:37:23.283 [2024-07-24 07:26:31.231003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:90784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007587000 len:0x1000 key:0x183e00 00:37:23.283 [2024-07-24 07:26:31.231031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:90792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007565000 len:0x1000 key:0x183e00 00:37:23.283 [2024-07-24 07:26:31.231059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:90800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007571000 len:0x1000 key:0x183e00 00:37:23.283 [2024-07-24 07:26:31.231086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:90808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007593000 len:0x1000 key:0x183e00 00:37:23.283 [2024-07-24 07:26:31.231113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007515000 len:0x1000 key:0x183e00 00:37:23.283 [2024-07-24 07:26:31.231140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:90824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007517000 len:0x1000 key:0x183e00 00:37:23.283 [2024-07-24 07:26:31.231167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:90832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007519000 len:0x1000 key:0x183e00 00:37:23.283 [2024-07-24 07:26:31.231193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:90840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751b000 len:0x1000 key:0x183e00 00:37:23.283 [2024-07-24 07:26:31.231221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:90848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751d000 
len:0x1000 key:0x183e00 00:37:23.283 [2024-07-24 07:26:31.231248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:90856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751f000 len:0x1000 key:0x183e00 00:37:23.283 [2024-07-24 07:26:31.231276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:91248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.283 [2024-07-24 07:26:31.231303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:91256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.283 [2024-07-24 07:26:31.231330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:91264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.283 [2024-07-24 07:26:31.231357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:91272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.283 [2024-07-24 07:26:31.231383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:91280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.283 [2024-07-24 07:26:31.231410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.283 [2024-07-24 07:26:31.231437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:91296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.283 [2024-07-24 07:26:31.231463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:91304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.283 [2024-07-24 07:26:31.231489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:91312 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:37:23.283 [2024-07-24 07:26:31.231516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.283 [2024-07-24 07:26:31.231541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:91328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.283 [2024-07-24 07:26:31.231570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:91336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.283 [2024-07-24 07:26:31.231596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.283 [2024-07-24 07:26:31.231622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.283 [2024-07-24 07:26:31.231653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:91360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.283 [2024-07-24 07:26:31.231680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:91368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.283 [2024-07-24 07:26:31.231708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.283 [2024-07-24 07:26:31.231734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.283 [2024-07-24 07:26:31.231762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:91392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.283 [2024-07-24 07:26:31.231788] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.283 [2024-07-24 07:26:31.231815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:91408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.283 [2024-07-24 07:26:31.231841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.283 [2024-07-24 07:26:31.231867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:91424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.283 [2024-07-24 07:26:31.231893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.283 [2024-07-24 07:26:31.231909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:91432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.283 [2024-07-24 07:26:31.231922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.231936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:91440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.284 [2024-07-24 07:26:31.231949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.231963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:91448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.284 [2024-07-24 07:26:31.231975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.231989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:91456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.284 [2024-07-24 07:26:31.232002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.232015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:91464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.284 [2024-07-24 07:26:31.232027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.232042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:91472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.284 [2024-07-24 07:26:31.232054] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.232068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.284 [2024-07-24 07:26:31.232080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.232094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:91488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.284 [2024-07-24 07:26:31.232107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.232122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:91496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.284 [2024-07-24 07:26:31.232135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.232149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:90864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b3000 len:0x1000 key:0x183e00 00:37:23.284 [2024-07-24 07:26:31.232162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.232176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:90872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b1000 len:0x1000 key:0x183e00 00:37:23.284 [2024-07-24 07:26:31.232189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.232203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:90880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075af000 len:0x1000 key:0x183e00 00:37:23.284 [2024-07-24 07:26:31.232216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.232232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:90888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ad000 len:0x1000 key:0x183e00 00:37:23.284 [2024-07-24 07:26:31.232245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.232259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:90896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ab000 len:0x1000 key:0x183e00 00:37:23.284 [2024-07-24 07:26:31.232271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.232286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:90904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a9000 len:0x1000 key:0x183e00 00:37:23.284 [2024-07-24 07:26:31.232298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.232312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:90912 
len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a7000 len:0x1000 key:0x183e00 00:37:23.284 [2024-07-24 07:26:31.232325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.232339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:90920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a5000 len:0x1000 key:0x183e00 00:37:23.284 [2024-07-24 07:26:31.232352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.232365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:91504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.284 [2024-07-24 07:26:31.232377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.232391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:91512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.284 [2024-07-24 07:26:31.232403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.232418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:91520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.284 [2024-07-24 07:26:31.232429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.232443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:91528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.284 [2024-07-24 07:26:31.232455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.232469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.284 [2024-07-24 07:26:31.232481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.232495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:91544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.284 [2024-07-24 07:26:31.232507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.232522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:91552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.284 [2024-07-24 07:26:31.232534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.232547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.284 [2024-07-24 07:26:31.232561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.232576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 
lba:90928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007555000 len:0x1000 key:0x183e00 00:37:23.284 [2024-07-24 07:26:31.232589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.232603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:90936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007557000 len:0x1000 key:0x183e00 00:37:23.284 [2024-07-24 07:26:31.232615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.232634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:90944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007559000 len:0x1000 key:0x183e00 00:37:23.284 [2024-07-24 07:26:31.232646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.232661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:90952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755b000 len:0x1000 key:0x183e00 00:37:23.284 [2024-07-24 07:26:31.232674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.232688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:90960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755d000 len:0x1000 key:0x183e00 00:37:23.284 [2024-07-24 07:26:31.232700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.232715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:90968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755f000 len:0x1000 key:0x183e00 00:37:23.284 [2024-07-24 07:26:31.232727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.284 [2024-07-24 07:26:31.232742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:90976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007561000 len:0x1000 key:0x183e00 00:37:23.285 [2024-07-24 07:26:31.232754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.232769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:90984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007563000 len:0x1000 key:0x183e00 00:37:23.285 [2024-07-24 07:26:31.232781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.232795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:91568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.285 [2024-07-24 07:26:31.232808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.232824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:91576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.285 [2024-07-24 07:26:31.232838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.232852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.285 [2024-07-24 07:26:31.232864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.232880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:91592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.285 [2024-07-24 07:26:31.232893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.232907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:91600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.285 [2024-07-24 07:26:31.232919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.232933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:91608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.285 [2024-07-24 07:26:31.232946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.232961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:91616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.285 [2024-07-24 07:26:31.232973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.232987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:91624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.285 [2024-07-24 07:26:31.233000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.233014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.285 [2024-07-24 07:26:31.233027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.233040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:91640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.285 [2024-07-24 07:26:31.233053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.233067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:91648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.285 [2024-07-24 07:26:31.233082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.233095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:91656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.285 [2024-07-24 07:26:31.233107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.233122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.285 [2024-07-24 07:26:31.233135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.233149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.285 [2024-07-24 07:26:31.233161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.233176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:91680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.285 [2024-07-24 07:26:31.233189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.233205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:91688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.285 [2024-07-24 07:26:31.233217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.233234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:90992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007525000 len:0x1000 key:0x183e00 00:37:23.285 [2024-07-24 07:26:31.233247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.233262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:91000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007527000 len:0x1000 key:0x183e00 00:37:23.285 [2024-07-24 07:26:31.233275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.233290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:91008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007529000 len:0x1000 key:0x183e00 00:37:23.285 [2024-07-24 07:26:31.233303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.233319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:91016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752b000 len:0x1000 key:0x183e00 00:37:23.285 [2024-07-24 07:26:31.233331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.233345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:91024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752d000 len:0x1000 key:0x183e00 00:37:23.285 [2024-07-24 07:26:31.233358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.233373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:91032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752f000 len:0x1000 key:0x183e00 00:37:23.285 [2024-07-24 
07:26:31.233385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.233399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:91040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007531000 len:0x1000 key:0x183e00 00:37:23.285 [2024-07-24 07:26:31.233413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.233427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:91048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007533000 len:0x1000 key:0x183e00 00:37:23.285 [2024-07-24 07:26:31.233439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.233453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:91056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b9000 len:0x1000 key:0x183e00 00:37:23.285 [2024-07-24 07:26:31.233467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.233481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:91064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b7000 len:0x1000 key:0x183e00 00:37:23.285 [2024-07-24 07:26:31.233494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.233508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:91072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007585000 len:0x1000 key:0x183e00 00:37:23.285 [2024-07-24 07:26:31.233521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.233536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:91080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007577000 len:0x1000 key:0x183e00 00:37:23.285 [2024-07-24 07:26:31.233551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.233566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:91088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007579000 len:0x1000 key:0x183e00 00:37:23.285 [2024-07-24 07:26:31.233578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.233592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:91096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757b000 len:0x1000 key:0x183e00 00:37:23.285 [2024-07-24 07:26:31.233605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.233619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757d000 len:0x1000 key:0x183e00 00:37:23.285 [2024-07-24 07:26:31.233635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.233649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:91112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757f000 len:0x1000 key:0x183e00 00:37:23.285 [2024-07-24 07:26:31.233662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.285 [2024-07-24 07:26:31.233677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:91120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a3000 len:0x1000 key:0x183e00 00:37:23.285 [2024-07-24 07:26:31.233690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.286 [2024-07-24 07:26:31.233704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:91128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007521000 len:0x1000 key:0x183e00 00:37:23.286 [2024-07-24 07:26:31.233717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.286 [2024-07-24 07:26:31.233731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:91136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007553000 len:0x1000 key:0x183e00 00:37:23.286 [2024-07-24 07:26:31.233745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.286 [2024-07-24 07:26:31.233758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:91144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a1000 len:0x1000 key:0x183e00 00:37:23.286 [2024-07-24 07:26:31.233773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.286 [2024-07-24 07:26:31.233787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:91152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759f000 len:0x1000 key:0x183e00 00:37:23.286 [2024-07-24 07:26:31.233799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.286 [2024-07-24 07:26:31.233813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:91160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759d000 len:0x1000 key:0x183e00 00:37:23.286 [2024-07-24 07:26:31.233826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.286 [2024-07-24 07:26:31.233840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:91168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759b000 len:0x1000 key:0x183e00 00:37:23.286 [2024-07-24 07:26:31.233853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.286 [2024-07-24 07:26:31.233870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:91176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007599000 len:0x1000 key:0x183e00 00:37:23.286 [2024-07-24 07:26:31.233883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.286 [2024-07-24 07:26:31.233897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 
nsid:1 lba:91696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.286 [2024-07-24 07:26:31.233910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.286 [2024-07-24 07:26:31.233924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.286 [2024-07-24 07:26:31.233936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.286 [2024-07-24 07:26:31.233952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:91712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.286 [2024-07-24 07:26:31.233964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.286 [2024-07-24 07:26:31.233980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.286 [2024-07-24 07:26:31.233992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.286 [2024-07-24 07:26:31.236017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:23.286 [2024-07-24 07:26:31.236037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:23.286 [2024-07-24 07:26:31.236051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91728 len:8 PRP1 0x0 PRP2 0x0 00:37:23.286 [2024-07-24 07:26:31.236064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.286 [2024-07-24 07:26:31.236218] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20000b1ff540 was disconnected and freed. reset controller. 00:37:23.286 [2024-07-24 07:26:31.236236] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:37:23.286 [2024-07-24 07:26:31.236251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:23.286 [2024-07-24 07:26:31.239290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:23.286 [2024-07-24 07:26:31.267039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:37:23.286 [2024-07-24 07:26:31.312807] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:37:23.286 00:37:23.286 Latency(us) 00:37:23.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:23.286 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:23.286 Verification LBA range: start 0x0 length 0x4000 00:37:23.286 NVMe0n1 : 15.01 12466.56 48.70 300.50 0.00 10000.00 484.97 1020054.73 00:37:23.286 =================================================================================================================== 00:37:23.286 Total : 12466.56 48.70 300.50 0.00 10000.00 484.97 1020054.73 00:37:23.286 Received shutdown signal, test time was about 15.000000 seconds 00:37:23.286 00:37:23.286 Latency(us) 00:37:23.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:23.286 =================================================================================================================== 00:37:23.286 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:23.286 07:26:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:37:23.286 07:26:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:37:23.286 07:26:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:37:23.286 07:26:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1872705 00:37:23.286 07:26:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:37:23.286 07:26:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1872705 /var/tmp/bdevperf.sock 00:37:23.286 07:26:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1872705 ']' 00:37:23.286 07:26:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:23.286 07:26:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:23.286 07:26:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:23.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:37:23.286 07:26:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:23.286 07:26:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:37:23.870 07:26:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:23.870 07:26:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:37:23.871 07:26:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:37:24.130 [2024-07-24 07:26:38.634066] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:37:24.130 07:26:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:37:24.389 [2024-07-24 07:26:38.810631] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:37:24.389 07:26:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:24.648 NVMe0n1 00:37:24.648 07:26:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:24.907 00:37:24.907 07:26:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:24.907 00:37:25.166 07:26:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:37:25.166 07:26:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:37:25.166 07:26:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:25.425 07:26:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:37:28.716 07:26:42 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:37:28.716 07:26:42 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:37:28.716 07:26:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1873606 00:37:28.716 07:26:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:28.716 07:26:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1873606 00:37:29.653 0 00:37:29.653 07:26:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:37:29.653 [2024-07-24 07:26:37.687751] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:37:29.653 [2024-07-24 07:26:37.687868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1872705 ] 00:37:29.653 EAL: No free 2048 kB hugepages reported on node 1 00:37:29.653 [2024-07-24 07:26:37.835503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:29.653 [2024-07-24 07:26:38.056098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:29.653 [2024-07-24 07:26:39.896690] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:37:29.653 [2024-07-24 07:26:39.897336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:29.653 [2024-07-24 07:26:39.897392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:29.653 [2024-07-24 07:26:39.927890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:37:29.653 [2024-07-24 07:26:39.950777] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:37:29.653 Running I/O for 1 seconds... 00:37:29.653 00:37:29.653 Latency(us) 00:37:29.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:29.653 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:29.653 Verification LBA range: start 0x0 length 0x4000 00:37:29.653 NVMe0n1 : 1.01 15663.43 61.19 0.00 0.00 8126.05 3224.37 20342.37 00:37:29.653 =================================================================================================================== 00:37:29.653 Total : 15663.43 61.19 0.00 0.00 8126.05 3224.37 20342.37 00:37:29.653 07:26:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:37:29.653 07:26:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:37:29.912 07:26:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:30.172 07:26:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:37:30.172 07:26:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:37:30.432 07:26:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:30.432 07:26:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:37:33.723 07:26:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:37:33.723 07:26:48 
nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:37:33.723 07:26:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1872705 00:37:33.723 07:26:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1872705 ']' 00:37:33.723 07:26:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1872705 00:37:33.723 07:26:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:37:33.723 07:26:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:33.723 07:26:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1872705 00:37:33.723 07:26:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:33.723 07:26:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:33.723 07:26:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1872705' 00:37:33.723 killing process with pid 1872705 00:37:33.723 07:26:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1872705 00:37:33.723 07:26:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1872705 00:37:35.102 07:26:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:37:35.102 07:26:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:35.102 07:26:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:37:35.102 07:26:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:37:35.102 07:26:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:37:35.102 07:26:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:35.102 07:26:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:37:35.102 07:26:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:37:35.102 07:26:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:37:35.102 07:26:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:37:35.102 07:26:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:35.102 07:26:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:37:35.102 rmmod nvme_rdma 00:37:35.102 rmmod nvme_fabrics 00:37:35.102 07:26:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:35.102 07:26:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:37:35.102 07:26:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:37:35.102 07:26:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1869465 ']' 00:37:35.102 07:26:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1869465 00:37:35.102 07:26:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1869465 ']' 00:37:35.102 07:26:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1869465 00:37:35.102 07:26:49 
nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:37:35.102 07:26:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:35.102 07:26:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1869465 00:37:35.102 07:26:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:35.102 07:26:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:35.102 07:26:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1869465' 00:37:35.102 killing process with pid 1869465 00:37:35.102 07:26:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1869465 00:37:35.102 07:26:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1869465 00:37:37.009 07:26:51 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:37.009 07:26:51 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:37:37.009 00:37:37.009 real 0m42.569s 00:37:37.009 user 2m14.735s 00:37:37.009 sys 0m9.314s 00:37:37.009 07:26:51 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:37.009 07:26:51 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:37:37.009 ************************************ 00:37:37.009 END TEST nvmf_failover 00:37:37.010 ************************************ 00:37:37.010 07:26:51 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:37:37.010 07:26:51 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:37.010 07:26:51 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:37.010 07:26:51 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.010 ************************************ 00:37:37.010 START TEST nvmf_host_discovery 00:37:37.010 ************************************ 00:37:37.010 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:37:37.270 * Looking for test storage... 
00:37:37.270 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:37:37.270 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:37:37.270 00:37:37.270 real 0m0.107s 00:37:37.270 user 0m0.041s 00:37:37.270 sys 0m0.070s 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:37.270 ************************************ 00:37:37.270 END TEST nvmf_host_discovery 00:37:37.270 ************************************ 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:37.270 07:26:51 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.270 ************************************ 00:37:37.271 START TEST nvmf_host_multipath_status 00:37:37.271 ************************************ 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:37:37.271 * Looking for test storage... 00:37:37.271 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:37.271 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:37.271 07:26:51 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:37.531 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:37.531 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:37.531 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:37:37.531 07:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:37:45.721 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:45.721 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:37:45.721 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:45.721 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:45.721 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:45.722 07:26:59 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:37:45.722 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:37:45.722 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:45.722 
07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:37:45.722 Found net devices under 0000:d9:00.0: mlx_0_0 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:37:45.722 Found net devices under 0000:d9:00.1: mlx_0_1 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # rdma_device_init 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # uname 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # modprobe ib_cm 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@63 -- # modprobe ib_core 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@64 -- # modprobe ib_umad 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe iw_cm 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@68 -- # modprobe rdma_ucm 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # allocate_nic_ips 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # get_rdma_if_list 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:37:45.722 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip 
addr show mlx_0_0 00:37:45.723 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:37:45.723 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:37:45.723 altname enp217s0f0np0 00:37:45.723 altname ens818f0np0 00:37:45.723 inet 192.168.100.8/24 scope global mlx_0_0 00:37:45.723 valid_lft forever preferred_lft forever 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:37:45.723 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:37:45.723 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:37:45.723 altname enp217s0f1np1 00:37:45.723 altname ens818f1np1 00:37:45.723 inet 192.168.100.9/24 scope global mlx_0_1 00:37:45.723 valid_lft forever preferred_lft forever 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # get_rdma_if_list 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 
00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:37:45.723 192.168.100.9' 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:37:45.723 192.168.100.9' 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # head -n 1 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:37:45.723 192.168.100.9' 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # tail -n +2 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # head -n 1 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:37:45.723 
07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1878830 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1878830 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1878830 ']' 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:45.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:37:45.723 07:26:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:37:45.723 [2024-07-24 07:26:59.433018] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:37:45.723 [2024-07-24 07:26:59.433109] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:45.723 EAL: No free 2048 kB hugepages reported on node 1 00:37:45.723 [2024-07-24 07:26:59.580121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:45.723 [2024-07-24 07:26:59.780903] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:45.723 [2024-07-24 07:26:59.780950] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:45.723 [2024-07-24 07:26:59.780966] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:45.723 [2024-07-24 07:26:59.780977] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:45.723 [2024-07-24 07:26:59.780988] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:45.723 [2024-07-24 07:26:59.781076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:45.723 [2024-07-24 07:26:59.781090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:45.723 07:27:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:45.723 07:27:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:37:45.723 07:27:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:45.723 07:27:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:45.723 07:27:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:37:45.723 07:27:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:45.723 07:27:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1878830 00:37:45.723 07:27:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:37:45.983 [2024-07-24 07:27:00.427067] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028b40/0x7f423f683940) succeed. 00:37:45.983 [2024-07-24 07:27:00.436322] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028cc0/0x7f423f63e940) succeed. 00:37:46.242 07:27:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:37:46.242 Malloc0 00:37:46.502 07:27:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:37:46.502 07:27:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:46.761 07:27:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:37:46.761 [2024-07-24 07:27:01.388697] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:37:47.020 07:27:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:37:47.020 [2024-07-24 07:27:01.565023] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:37:47.020 07:27:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1879455 00:37:47.020 07:27:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:37:47.020 07:27:01 
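The target-side configuration performed by multipath_status.sh in the trace above reads as the following RPC sequence (condensed sketch; rpc abbreviates the full scripts/rpc.py path shown in the log).

# Target-side setup condensed from host/multipath_status.sh@33-42 above.
spdk_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
rpc="$spdk_dir/scripts/rpc.py"
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421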
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:47.020 07:27:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1879455 /var/tmp/bdevperf.sock 00:37:47.020 07:27:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1879455 ']' 00:37:47.020 07:27:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:47.020 07:27:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:47.020 07:27:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:47.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:47.020 07:27:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:47.020 07:27:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:37:47.958 07:27:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:47.958 07:27:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:37:47.958 07:27:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:37:48.216 07:27:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:37:48.474 Nvme0n1 00:37:48.474 07:27:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:37:48.732 Nvme0n1 00:37:48.732 07:27:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:37:48.732 07:27:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:37:50.634 07:27:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:37:50.634 07:27:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:37:50.893 07:27:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:37:50.893 07:27:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:37:52.268 07:27:06 
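On the initiator side the bdevperf instance is wired up as two paths to the same subsystem. Below is a sketch of the calls traced above (socket, NQN and flags exactly as logged), together with a reconstruction of the set_ANA_state helper that the script keeps re-using for every state flip; the helper body is inferred from the @59/@60 calls in the trace.

# Host-side attach condensed from host/multipath_status.sh@52-56 above.
spdk_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
rpc="$spdk_dir/scripts/rpc.py"                  # target, default /var/tmp/spdk.sock
brpc="$rpc -s /var/tmp/bdevperf.sock"           # bdevperf instance
$brpc bdev_nvme_set_options -r -1
$brpc bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
$brpc bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10   # second listener joins as an extra path

# ANA helper mirrored from host/multipath_status.sh@59-60 above.
set_ANA_state() {
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
         -t rdma -a 192.168.100.8 -s 4420 -n "$1"
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
         -t rdma -a 192.168.100.8 -s 4421 -n "$2"
}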
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:37:52.268 07:27:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:37:52.268 07:27:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:52.268 07:27:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:37:52.268 07:27:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:52.268 07:27:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:37:52.268 07:27:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:52.268 07:27:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:37:52.268 07:27:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:37:52.268 07:27:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:37:52.268 07:27:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:52.268 07:27:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:37:52.527 07:27:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:52.527 07:27:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:37:52.527 07:27:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:52.527 07:27:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:37:52.785 07:27:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:52.785 07:27:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:37:52.785 07:27:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:52.785 07:27:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:37:53.043 07:27:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:53.043 07:27:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:37:53.043 07:27:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:37:53.043 07:27:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:53.043 07:27:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:53.043 07:27:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:37:53.043 07:27:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:37:53.302 07:27:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:37:53.561 07:27:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:37:54.498 07:27:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:37:54.498 07:27:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:37:54.498 07:27:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:54.498 07:27:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:37:54.757 07:27:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:37:54.757 07:27:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:37:54.757 07:27:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:54.757 07:27:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:37:54.757 07:27:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:54.757 07:27:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:37:54.757 07:27:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:54.757 07:27:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:37:55.017 07:27:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e 
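Each check_status round in the trace is six probes of the same shape. The sketch below reconstructs the port_status pattern from the calls above, with the jq filter copied from the trace; variable names are illustrative.

# Pattern behind host/multipath_status.sh@64 (port_status), as traced above.
spdk_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
brpc="$spdk_dir/scripts/rpc.py -s /var/tmp/bdevperf.sock"
port_status() {
    local port=$1 attr=$2 expected=$3
    local actual
    actual=$($brpc bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ "$actual" == "$expected" ]]
}
# Example round matching "check_status true false true true true true" above:
# port_status 4420 current true   && port_status 4421 current false \
#   && port_status 4420 connected true  && port_status 4421 connected true \
#   && port_status 4420 accessible true && port_status 4421 accessible true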
]] 00:37:55.017 07:27:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:37:55.017 07:27:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:55.017 07:27:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:37:55.276 07:27:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:55.276 07:27:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:37:55.276 07:27:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:55.276 07:27:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:37:55.276 07:27:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:55.276 07:27:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:37:55.276 07:27:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:55.276 07:27:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:37:55.535 07:27:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:55.535 07:27:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:37:55.535 07:27:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:37:55.794 07:27:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:37:56.053 07:27:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:37:56.991 07:27:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:37:56.992 07:27:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:37:56.992 07:27:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:56.992 07:27:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:37:56.992 07:27:11 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:56.992 07:27:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:37:56.992 07:27:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:56.992 07:27:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:37:57.250 07:27:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:37:57.250 07:27:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:37:57.250 07:27:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:57.250 07:27:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:37:57.509 07:27:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:57.509 07:27:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:37:57.509 07:27:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:57.509 07:27:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:37:57.767 07:27:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:57.767 07:27:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:37:57.767 07:27:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:57.767 07:27:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:37:57.767 07:27:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:57.767 07:27:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:37:57.767 07:27:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:57.767 07:27:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:37:58.025 07:27:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:58.025 07:27:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # 
set_ANA_state non_optimized inaccessible 00:37:58.025 07:27:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:37:58.283 07:27:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:37:58.283 07:27:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:37:59.278 07:27:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:37:59.278 07:27:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:37:59.278 07:27:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:59.278 07:27:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:37:59.537 07:27:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:59.537 07:27:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:37:59.537 07:27:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:59.537 07:27:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:37:59.797 07:27:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:37:59.797 07:27:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:37:59.797 07:27:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:59.797 07:27:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:37:59.797 07:27:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:59.797 07:27:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:37:59.797 07:27:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:59.797 07:27:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:00.056 07:27:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:00.056 07:27:14 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:00.056 07:27:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:00.056 07:27:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:00.315 07:27:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:00.315 07:27:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:38:00.315 07:27:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:00.315 07:27:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:00.574 07:27:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:00.574 07:27:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:38:00.574 07:27:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:38:00.574 07:27:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:38:00.832 07:27:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:38:01.768 07:27:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:38:01.768 07:27:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:38:01.768 07:27:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:01.768 07:27:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:02.027 07:27:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:02.027 07:27:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:38:02.027 07:27:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:02.027 07:27:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:02.027 07:27:16 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:02.027 07:27:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:02.027 07:27:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:02.027 07:27:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:02.285 07:27:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:02.285 07:27:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:02.285 07:27:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:02.285 07:27:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:02.544 07:27:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:02.544 07:27:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:38:02.544 07:27:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:02.544 07:27:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:02.544 07:27:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:02.544 07:27:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:38:02.803 07:27:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:02.803 07:27:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:02.803 07:27:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:02.803 07:27:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:38:02.803 07:27:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:38:03.062 07:27:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:38:03.062 07:27:17 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:38:04.440 07:27:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:38:04.440 07:27:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:38:04.440 07:27:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:04.440 07:27:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:04.440 07:27:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:04.440 07:27:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:38:04.440 07:27:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:04.440 07:27:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:04.440 07:27:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:04.440 07:27:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:04.440 07:27:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:04.440 07:27:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:04.699 07:27:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:04.699 07:27:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:04.699 07:27:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:04.699 07:27:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:04.957 07:27:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:04.957 07:27:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:38:04.957 07:27:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:04.957 07:27:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:05.216 07:27:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
[[ false == \f\a\l\s\e ]] 00:38:05.216 07:27:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:38:05.216 07:27:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:05.216 07:27:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:05.216 07:27:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:05.216 07:27:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:38:05.476 07:27:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:38:05.476 07:27:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:38:05.734 07:27:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:38:05.734 07:27:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:38:07.111 07:27:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:38:07.111 07:27:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:38:07.111 07:27:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:07.111 07:27:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:07.111 07:27:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:07.111 07:27:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:38:07.111 07:27:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:07.111 07:27:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:07.111 07:27:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:07.111 07:27:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:07.111 07:27:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:07.111 07:27:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:07.369 07:27:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:07.369 07:27:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:07.369 07:27:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:07.369 07:27:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:07.628 07:27:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:07.628 07:27:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:07.628 07:27:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:07.628 07:27:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:07.628 07:27:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:07.628 07:27:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:38:07.628 07:27:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:07.628 07:27:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:07.887 07:27:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:07.887 07:27:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:38:07.887 07:27:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:38:08.145 07:27:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:38:08.403 07:27:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:38:09.350 07:27:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:38:09.350 07:27:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:38:09.350 07:27:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
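At host/multipath_status.sh@116 above the bdev is switched to the active_active multipath policy; from that point the check rounds accept both listeners reporting current==true in the same round (check_status true true ...), whereas the earlier rounds expected at most one current path. The single RPC involved, repeated here for visibility with the workspace path abbreviated:

spdk_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$spdk_dir"/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active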
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:09.350 07:27:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:09.350 07:27:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:09.350 07:27:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:38:09.350 07:27:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:09.350 07:27:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:09.608 07:27:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:09.608 07:27:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:09.608 07:27:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:09.608 07:27:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:09.867 07:27:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:09.867 07:27:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:09.867 07:27:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:09.867 07:27:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:10.126 07:27:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:10.126 07:27:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:10.126 07:27:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:10.126 07:27:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:10.126 07:27:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:10.126 07:27:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:38:10.126 07:27:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:10.126 07:27:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:10.384 07:27:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:10.384 07:27:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:38:10.384 07:27:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:38:10.643 07:27:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:38:10.643 07:27:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:38:12.021 07:27:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:38:12.021 07:27:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:38:12.022 07:27:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:12.022 07:27:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:12.022 07:27:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:12.022 07:27:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:38:12.022 07:27:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:12.022 07:27:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:12.022 07:27:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:12.022 07:27:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:12.022 07:27:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:12.022 07:27:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:12.280 07:27:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:12.280 07:27:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:12.280 07:27:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:38:12.280 07:27:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:12.539 07:27:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:12.539 07:27:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:12.539 07:27:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:12.539 07:27:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:12.539 07:27:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:12.539 07:27:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:38:12.539 07:27:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:12.539 07:27:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:12.838 07:27:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:12.838 07:27:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:38:12.838 07:27:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:38:13.098 07:27:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:38:13.098 07:27:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:38:14.476 07:27:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:38:14.476 07:27:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:38:14.476 07:27:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:14.476 07:27:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:14.476 07:27:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:14.476 07:27:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:38:14.476 07:27:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:14.476 07:27:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:14.476 07:27:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:14.476 07:27:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:14.476 07:27:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:14.476 07:27:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:14.736 07:27:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:14.736 07:27:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:14.736 07:27:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:14.736 07:27:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:14.995 07:27:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:14.995 07:27:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:14.995 07:27:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:14.995 07:27:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:14.995 07:27:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:14.995 07:27:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:38:14.995 07:27:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:14.995 07:27:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:15.254 07:27:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:15.254 07:27:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1879455 00:38:15.254 07:27:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1879455 ']' 00:38:15.254 07:27:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1879455 00:38:15.254 07:27:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@953 -- # uname 00:38:15.254 07:27:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:15.254 07:27:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1879455 00:38:15.254 07:27:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:38:15.254 07:27:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:38:15.254 07:27:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1879455' 00:38:15.254 killing process with pid 1879455 00:38:15.254 07:27:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1879455 00:38:15.254 07:27:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1879455 00:38:15.822 Connection closed with partial response: 00:38:15.822 00:38:15.822 00:38:16.394 07:27:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1879455 00:38:16.394 07:27:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:38:16.394 [2024-07-24 07:27:01.659328] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:38:16.394 [2024-07-24 07:27:01.659433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1879455 ] 00:38:16.394 EAL: No free 2048 kB hugepages reported on node 1 00:38:16.394 [2024-07-24 07:27:01.803513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:16.394 [2024-07-24 07:27:02.015130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:16.394 Running I/O for 90 seconds... 
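The cat of try.txt above dumps the bdevperf-side NVMe completion trace collected while the ANA states were being flipped. As a hedged convenience only: if all that is needed is a count of I/Os that completed with the ANA "inaccessible" status seen in the entries below, a grep over the same file (path as logged) is enough.

grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' \
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt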
00:38:16.394 [2024-07-24 07:27:15.119774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:126920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757b000 len:0x1000 key:0x183000 00:38:16.394 [2024-07-24 07:27:15.119825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:38:16.394 [2024-07-24 07:27:15.119878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007579000 len:0x1000 key:0x183000 00:38:16.394 [2024-07-24 07:27:15.119893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:38:16.394 [2024-07-24 07:27:15.119912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007577000 len:0x1000 key:0x183000 00:38:16.394 [2024-07-24 07:27:15.119926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:38:16.394 [2024-07-24 07:27:15.119944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:126944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007575000 len:0x1000 key:0x183000 00:38:16.394 [2024-07-24 07:27:15.119957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:38:16.394 [2024-07-24 07:27:15.119977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007573000 len:0x1000 key:0x183000 00:38:16.394 [2024-07-24 07:27:15.119990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:38:16.394 [2024-07-24 07:27:15.120012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:126960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007571000 len:0x1000 key:0x183000 00:38:16.394 [2024-07-24 07:27:15.120026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:38:16.394 [2024-07-24 07:27:15.120044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756f000 len:0x1000 key:0x183000 00:38:16.394 [2024-07-24 07:27:15.120057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:38:16.394 [2024-07-24 07:27:15.120077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:127160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.394 [2024-07-24 07:27:15.120090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:16.394 [2024-07-24 07:27:15.120108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:127168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.394 [2024-07-24 07:27:15.120121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:38:16.394 [2024-07-24 07:27:15.120140] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:127176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.394 [2024-07-24 07:27:15.120154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:16.394 [2024-07-24 07:27:15.120176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:127184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.394 [2024-07-24 07:27:15.120189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:38:16.394 [2024-07-24 07:27:15.120208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:127192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.394 [2024-07-24 07:27:15.120221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:38:16.394 [2024-07-24 07:27:15.120238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:127200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.394 [2024-07-24 07:27:15.120252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:38:16.394 [2024-07-24 07:27:15.120272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:127208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.394 [2024-07-24 07:27:15.120285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:38:16.394 [2024-07-24 07:27:15.120303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:127216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.394 [2024-07-24 07:27:15.120315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:16.394 [2024-07-24 07:27:15.120333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:127224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.394 [2024-07-24 07:27:15.120345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:38:16.394 [2024-07-24 07:27:15.120365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:127232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.394 [2024-07-24 07:27:15.120377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:38:16.394 [2024-07-24 07:27:15.120396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:127240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.394 [2024-07-24 07:27:15.120408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:38:16.394 [2024-07-24 07:27:15.120426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:127248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.394 [2024-07-24 07:27:15.120438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:38:16.394 [2024-07-24 
07:27:15.120455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:127256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.394 [2024-07-24 07:27:15.120468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:38:16.394 [2024-07-24 07:27:15.120485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:127264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.394 [2024-07-24 07:27:15.120501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:38:16.394 [2024-07-24 07:27:15.120521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.394 [2024-07-24 07:27:15.120533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:38:16.394 [2024-07-24 07:27:15.120554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:127280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.394 [2024-07-24 07:27:15.120567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:16.394 [2024-07-24 07:27:15.120584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:127288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.394 [2024-07-24 07:27:15.120596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:16.394 [2024-07-24 07:27:15.120614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:127296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.394 [2024-07-24 07:27:15.120633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:38:16.394 [2024-07-24 07:27:15.120652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:127304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.394 [2024-07-24 07:27:15.120664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:38:16.394 [2024-07-24 07:27:15.120681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:127312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.394 [2024-07-24 07:27:15.120693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:38:16.394 [2024-07-24 07:27:15.120711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:127320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.394 [2024-07-24 07:27:15.120724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:38:16.394 [2024-07-24 07:27:15.120743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:127328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.394 [2024-07-24 07:27:15.120755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:93 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.120775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:127336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.120787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.120805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:127344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.120818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.120835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:127352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.120847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.120869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:127360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.120882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.120899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:127368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.120911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.120929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.120943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.120960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:127384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.120974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.120992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:127392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.121012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.121031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.121044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.121061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:127408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.121074] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.121092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:127416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.121105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.121125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:127424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.121137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.121155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.121168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.121185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:127440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.121198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.121215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:127448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.121228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.121246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:127456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.121259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.121279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:127464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.121291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.121309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:127472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.121323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.121341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:127480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.121353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.121371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:127488 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:38:16.395 [2024-07-24 07:27:15.121383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.121401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:127496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.121413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.121430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:127504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.121444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.121462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:127512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.121474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.121493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:127520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.121506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.121525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:127528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.121538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.121556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:127536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.121569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.121587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:127544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.121599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.121617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:127552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.121633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.121651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:127560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.121664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.121680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:6 nsid:1 lba:127568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.121695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.121713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:127576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.121726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.121743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:127584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.395 [2024-07-24 07:27:15.121757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.121778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:126976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fd000 len:0x1000 key:0x183000 00:38:16.395 [2024-07-24 07:27:15.121790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.121808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:126984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fb000 len:0x1000 key:0x183000 00:38:16.395 [2024-07-24 07:27:15.121821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.121838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f9000 len:0x1000 key:0x183000 00:38:16.395 [2024-07-24 07:27:15.121851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:38:16.395 [2024-07-24 07:27:15.121870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:127000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cf000 len:0x1000 key:0x183000 00:38:16.395 [2024-07-24 07:27:15.121899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.121918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:127008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750f000 len:0x1000 key:0x183000 00:38:16.396 [2024-07-24 07:27:15.121931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.121949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:127016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750d000 len:0x1000 key:0x183000 00:38:16.396 [2024-07-24 07:27:15.121962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.121980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:127024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cd000 len:0x1000 key:0x183000 00:38:16.396 
[2024-07-24 07:27:15.121993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.122012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:127032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cb000 len:0x1000 key:0x183000 00:38:16.396 [2024-07-24 07:27:15.122024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.122045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:127040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c9000 len:0x1000 key:0x183000 00:38:16.396 [2024-07-24 07:27:15.122058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.122078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:127048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c7000 len:0x1000 key:0x183000 00:38:16.396 [2024-07-24 07:27:15.122091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.122110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:127056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c5000 len:0x1000 key:0x183000 00:38:16.396 [2024-07-24 07:27:15.122123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.122141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:127064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c3000 len:0x1000 key:0x183000 00:38:16.396 [2024-07-24 07:27:15.122154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.122173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:127072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c1000 len:0x1000 key:0x183000 00:38:16.396 [2024-07-24 07:27:15.122186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.122203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:127080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bf000 len:0x1000 key:0x183000 00:38:16.396 [2024-07-24 07:27:15.122216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.122235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:127088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bd000 len:0x1000 key:0x183000 00:38:16.396 [2024-07-24 07:27:15.122248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.122266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bb000 len:0x1000 key:0x183000 00:38:16.396 [2024-07-24 07:27:15.122279] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.122301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:127104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b9000 len:0x1000 key:0x183000 00:38:16.396 [2024-07-24 07:27:15.122314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.122332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:127112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b7000 len:0x1000 key:0x183000 00:38:16.396 [2024-07-24 07:27:15.122345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.122364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:127120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b5000 len:0x1000 key:0x183000 00:38:16.396 [2024-07-24 07:27:15.122376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.122394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:127128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b3000 len:0x1000 key:0x183000 00:38:16.396 [2024-07-24 07:27:15.122407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.122425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:127136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b1000 len:0x1000 key:0x183000 00:38:16.396 [2024-07-24 07:27:15.122439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.122458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:127144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075af000 len:0x1000 key:0x183000 00:38:16.396 [2024-07-24 07:27:15.122472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.122490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:127152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ad000 len:0x1000 key:0x183000 00:38:16.396 [2024-07-24 07:27:15.122503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.122521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:127592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.396 [2024-07-24 07:27:15.122534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.122554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:127600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.396 [2024-07-24 07:27:15.122567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.122586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:127608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.396 [2024-07-24 07:27:15.122599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.122617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:127616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.396 [2024-07-24 07:27:15.122635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.122654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:127624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.396 [2024-07-24 07:27:15.122667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.122686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:127632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.396 [2024-07-24 07:27:15.122700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.122717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:127640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.396 [2024-07-24 07:27:15.122730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.122749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:127648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.396 [2024-07-24 07:27:15.122762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.122780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:127656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.396 [2024-07-24 07:27:15.122792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.122812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:127664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.396 [2024-07-24 07:27:15.122827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.122846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:127672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.396 [2024-07-24 07:27:15.122859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.122877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:127680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.396 [2024-07-24 07:27:15.122891] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.123309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:127688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.396 [2024-07-24 07:27:15.123331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:38:16.396 [2024-07-24 07:27:15.123358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:127696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.396 [2024-07-24 07:27:15.123371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:15.123785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:127704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:15.123803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:15.123830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:127712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:15.123843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:15.123869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:127720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:15.123890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:15.123917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:127728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:15.123931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:15.123954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:127736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:15.123968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:15.123992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:127744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:15.124005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:15.124029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:127752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:15.124042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:15.124066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:127760 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:38:16.397 [2024-07-24 07:27:15.124082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:15.124106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:127768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:15.124119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:15.124143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:127776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:15.124157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:15.124181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:127784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:15.124195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:15.124221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:127792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:15.124235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:15.124259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:127800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:15.124272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:15.124296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:127808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:15.124309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:15.124335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:127816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:15.124348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:15.124372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:127824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:15.124385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:15.124409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:127832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:15.124422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:15.124446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:35 nsid:1 lba:127840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:15.124459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:15.124483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:127848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:15.124496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:15.124523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:127856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:15.124535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:15.124561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:127864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:15.124574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:15.124598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:127872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:15.124611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:15.124640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:127880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:15.124653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:15.124677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:127888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:15.124692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:15.124715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:127896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:15.124729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:15.124754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:127904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:15.124767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:15.124791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:127912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:15.124804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:15.124831] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:127920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:15.124844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:15.124869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:127928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:15.124881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:15.124905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:127936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:15.124920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:27.653601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:97296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b9000 len:0x1000 key:0x183000 00:38:16.397 [2024-07-24 07:27:27.653659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:27.653689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d3000 len:0x1000 key:0x183000 00:38:16.397 [2024-07-24 07:27:27.653703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:27.653726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f1000 len:0x1000 key:0x183000 00:38:16.397 [2024-07-24 07:27:27.653738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:27.653757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:27.653770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:27.653798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.397 [2024-07-24 07:27:27.653811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:27.653834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cb000 len:0x1000 key:0x183000 00:38:16.397 [2024-07-24 07:27:27.653846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:27.653865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751f000 len:0x1000 key:0x183000 00:38:16.397 
[2024-07-24 07:27:27.653878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:38:16.397 [2024-07-24 07:27:27.653895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b5000 len:0x1000 key:0x183000 00:38:16.398 [2024-07-24 07:27:27.653908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:38:16.398 [2024-07-24 07:27:27.653926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075eb000 len:0x1000 key:0x183000 00:38:16.398 [2024-07-24 07:27:27.653938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:38:16.398 [2024-07-24 07:27:27.654312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f3000 len:0x1000 key:0x183000 00:38:16.398 [2024-07-24 07:27:27.654333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:38:16.398 [2024-07-24 07:27:27.654354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.398 [2024-07-24 07:27:27.654366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:38:16.398 [2024-07-24 07:27:27.654384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.398 [2024-07-24 07:27:27.654397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:16.398 [2024-07-24 07:27:27.654414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.398 [2024-07-24 07:27:27.654426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:38:16.398 [2024-07-24 07:27:27.654446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.398 [2024-07-24 07:27:27.654461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:38:16.398 [2024-07-24 07:27:27.654479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752d000 len:0x1000 key:0x183000 00:38:16.398 [2024-07-24 07:27:27.654492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:38:16.398 [2024-07-24 07:27:27.654509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.398 [2024-07-24 07:27:27.654521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:38:16.398 [2024-07-24 
07:27:27.654539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007517000 len:0x1000 key:0x183000 00:38:16.398 [2024-07-24 07:27:27.654551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:16.398 [2024-07-24 07:27:27.654568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:97304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d5000 len:0x1000 key:0x183000 00:38:16.398 [2024-07-24 07:27:27.654580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:38:16.398 [2024-07-24 07:27:27.654599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:97328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752b000 len:0x1000 key:0x183000 00:38:16.398 [2024-07-24 07:27:27.654612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:38:16.398 [2024-07-24 07:27:27.654635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:97992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.398 [2024-07-24 07:27:27.654648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:38:16.398 [2024-07-24 07:27:27.654673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:97360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a1000 len:0x1000 key:0x183000 00:38:16.398 [2024-07-24 07:27:27.654701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:16.398 [2024-07-24 07:27:27.654722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e5000 len:0x1000 key:0x183000 00:38:16.398 [2024-07-24 07:27:27.654735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:38:16.398 [2024-07-24 07:27:27.654753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:97408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a9000 len:0x1000 key:0x183000 00:38:16.398 [2024-07-24 07:27:27.654766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:16.398 [2024-07-24 07:27:27.654784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075af000 len:0x1000 key:0x183000 00:38:16.398 [2024-07-24 07:27:27.654796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.398 [2024-07-24 07:27:27.654814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753f000 len:0x1000 key:0x183000 00:38:16.398 [2024-07-24 07:27:27.654827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:16.398 [2024-07-24 07:27:27.654847] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:65 nsid:1 lba:97480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753b000 len:0x1000 key:0x183000 00:38:16.398 [2024-07-24 07:27:27.654859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:38:16.398 [... nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs continue in the same form for the remaining outstanding qid:1 READ and WRITE I/O (timestamps 2024-07-24 07:27:27.654877 through 07:27:27.666340); every completion in this window reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...] 00:38:16.404 [2024-07-24 07:27:27.666357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98912 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:38:16.404 [2024-07-24 07:27:27.666370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:38:16.404 [2024-07-24 07:27:27.666386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.404 [2024-07-24 07:27:27.666399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:38:16.404 [2024-07-24 07:27:27.666414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:98648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075eb000 len:0x1000 key:0x183000 00:38:16.404 [2024-07-24 07:27:27.666427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:38:16.404 [2024-07-24 07:27:27.666443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.404 [2024-07-24 07:27:27.666455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:38:16.404 [2024-07-24 07:27:27.666470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:98672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b9000 len:0x1000 key:0x183000 00:38:16.404 [2024-07-24 07:27:27.666483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:38:16.404 [2024-07-24 07:27:27.666500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:98696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f5000 len:0x1000 key:0x183000 00:38:16.404 [2024-07-24 07:27:27.666513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:38:16.404 [2024-07-24 07:27:27.666528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d7000 len:0x1000 key:0x183000 00:38:16.404 [2024-07-24 07:27:27.666541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:38:16.404 [2024-07-24 07:27:27.666556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.404 [2024-07-24 07:27:27.666568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:38:16.404 [2024-07-24 07:27:27.666593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e7000 len:0x1000 key:0x183000 00:38:16.404 [2024-07-24 07:27:27.666605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:38:16.404 [2024-07-24 07:27:27.666621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758f000 len:0x1000 key:0x183000 00:38:16.404 [2024-07-24 07:27:27.666642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:38:16.404 [2024-07-24 07:27:27.666658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.404 [2024-07-24 07:27:27.666670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:38:16.404 [2024-07-24 07:27:27.666686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007593000 len:0x1000 key:0x183000 00:38:16.404 [2024-07-24 07:27:27.666699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:16.404 [2024-07-24 07:27:27.666714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d3000 len:0x1000 key:0x183000 00:38:16.404 [2024-07-24 07:27:27.666726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:16.404 [2024-07-24 07:27:27.666742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.404 [2024-07-24 07:27:27.666755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:38:16.404 [2024-07-24 07:27:27.666771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758d000 len:0x1000 key:0x183000 00:38:16.404 [2024-07-24 07:27:27.666783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:38:16.404 [2024-07-24 07:27:27.666799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:98336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e9000 len:0x1000 key:0x183000 00:38:16.404 [2024-07-24 07:27:27.666811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:38:16.404 [2024-07-24 07:27:27.666827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:98384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d9000 len:0x1000 key:0x183000 00:38:16.404 [2024-07-24 07:27:27.666839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:38:16.404 [2024-07-24 07:27:27.666854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.404 [2024-07-24 07:27:27.666866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:38:16.404 [2024-07-24 07:27:27.666882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fd000 len:0x1000 key:0x183000 00:38:16.404 [2024-07-24 07:27:27.666894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:38:16.404 [2024-07-24 07:27:27.666909] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.404 [2024-07-24 07:27:27.666924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:38:16.404 [2024-07-24 07:27:27.666939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.404 [2024-07-24 07:27:27.674128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:38:16.404 [2024-07-24 07:27:27.674147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750f000 len:0x1000 key:0x183000 00:38:16.404 [2024-07-24 07:27:27.674160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:16.404 [2024-07-24 07:27:27.674176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007533000 len:0x1000 key:0x183000 00:38:16.404 [2024-07-24 07:27:27.674189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:38:16.404 [2024-07-24 07:27:27.674204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f3000 len:0x1000 key:0x183000 00:38:16.404 [2024-07-24 07:27:27.674217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:16.404 [2024-07-24 07:27:27.674232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.404 [2024-07-24 07:27:27.674245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:38:16.404 [2024-07-24 07:27:27.674261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753b000 len:0x1000 key:0x183000 00:38:16.404 [2024-07-24 07:27:27.674273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:38:16.405 [2024-07-24 07:27:27.674289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.405 [2024-07-24 07:27:27.674303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:38:16.405 [2024-07-24 07:27:27.674318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.405 [2024-07-24 07:27:27.674331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:38:16.405 [2024-07-24 07:27:27.674346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756f000 len:0x1000 key:0x183000 00:38:16.405 [2024-07-24 
07:27:27.674359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:16.405 [2024-07-24 07:27:27.674374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bf000 len:0x1000 key:0x183000 00:38:16.405 [2024-07-24 07:27:27.674387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:38:16.405 [2024-07-24 07:27:27.674403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fb000 len:0x1000 key:0x183000 00:38:16.405 [2024-07-24 07:27:27.674416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:38:16.405 Received shutdown signal, test time was about 26.530739 seconds 00:38:16.405 00:38:16.405 Latency(us) 00:38:16.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:16.405 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:38:16.405 Verification LBA range: start 0x0 length 0x4000 00:38:16.405 Nvme0n1 : 26.53 14124.42 55.17 0.00 0.00 9040.75 976.49 3019898.88 00:38:16.405 =================================================================================================================== 00:38:16.405 Total : 14124.42 55.17 0.00 0.00 9040.75 976.49 3019898.88 00:38:16.405 07:27:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:16.664 07:27:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:38:16.664 07:27:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:38:16.664 07:27:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:38:16.664 07:27:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:16.664 07:27:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:38:16.664 07:27:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:38:16.664 07:27:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:38:16.664 07:27:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:38:16.664 07:27:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:16.664 07:27:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:38:16.664 rmmod nvme_rdma 00:38:16.664 rmmod nvme_fabrics 00:38:16.664 07:27:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:16.664 07:27:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:38:16.664 07:27:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:38:16.664 07:27:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1878830 ']' 00:38:16.664 07:27:31 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1878830 00:38:16.664 07:27:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1878830 ']' 00:38:16.664 07:27:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1878830 00:38:16.664 07:27:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:38:16.664 07:27:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:16.664 07:27:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1878830 00:38:16.664 07:27:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:16.664 07:27:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:16.664 07:27:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1878830' 00:38:16.664 killing process with pid 1878830 00:38:16.664 07:27:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1878830 00:38:16.664 07:27:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1878830 00:38:18.569 07:27:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:18.569 07:27:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:38:18.569 00:38:18.569 real 0m41.130s 00:38:18.569 user 1m50.945s 00:38:18.569 sys 0m10.353s 00:38:18.569 07:27:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:18.569 07:27:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:38:18.569 ************************************ 00:38:18.569 END TEST nvmf_host_multipath_status 00:38:18.569 ************************************ 00:38:18.569 07:27:32 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:38:18.569 07:27:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:38:18.569 07:27:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:18.569 07:27:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:18.569 ************************************ 00:38:18.569 START TEST nvmf_discovery_remove_ifc 00:38:18.569 ************************************ 00:38:18.569 07:27:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:38:18.569 * Looking for test storage... 
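For reference, the multipath_status teardown traced above (nvmf_delete_subsystem over JSON-RPC, nvmfcleanup unloading the host fabrics modules, then killing the target process) boils down to roughly the following. This is a simplified sketch, not the test scripts' actual code; SPDK_DIR and target_pid are placeholder names introduced here.

    # Delete the subsystem created by the multipath test over JSON-RPC.
    SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # Unload the host-side fabrics modules (what nvmfcleanup does above).
    modprobe -v -r nvme-rdma
    modprobe -v -r nvme-fabrics

    # Stop the nvmf target process; the harness kills the PID it started
    # earlier and waits for it, shown here as a plain kill.
    [ -n "${target_pid:-}" ] && kill "$target_pid"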
00:38:18.569 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:38:18.569 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:38:18.569 00:38:18.569 real 0m0.136s 00:38:18.569 user 0m0.062s 00:38:18.569 sys 0m0.085s 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:18.569 ************************************ 00:38:18.569 END TEST nvmf_discovery_remove_ifc 00:38:18.569 ************************************ 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:18.569 ************************************ 00:38:18.569 START TEST nvmf_identify_kernel_target 00:38:18.569 ************************************ 00:38:18.569 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:38:18.828 * Looking for test storage... 
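The discovery_remove_ifc run above bails out immediately on RDMA. A guard along these lines produces that behavior; it is paraphrased from the trace at discovery_remove_ifc.sh lines 14-16, and the TEST_TRANSPORT variable name is an assumption rather than a quote from the script.

    # Paraphrase of the early-exit guard seen above (TEST_TRANSPORT is
    # assumed, not copied from the script).
    if [ "$TEST_TRANSPORT" = rdma ]; then
        echo "Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target."
        exit 0
    fi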
00:38:18.828 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:18.828 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:18.829 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:18.829 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:18.829 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:38:18.829 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:38:18.829 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:18.829 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:18.829 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:18.829 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:18.829 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:18.829 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:18.829 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:18.829 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:18.829 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:18.829 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:38:18.829 07:27:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:38:26.948 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:26.948 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:38:26.948 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:26.948 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:26.948 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:26.948 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 
00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:38:26.949 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:38:26.949 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:38:26.949 Found net devices under 0000:d9:00.0: mlx_0_0 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:38:26.949 Found net devices under 0000:d9:00.1: mlx_0_1 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ 
rdma == tcp ]] 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # rdma_device_init 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # uname 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:26.949 07:27:41 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:38:26.949 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:38:26.950 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:38:26.950 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:38:26.950 altname enp217s0f0np0 00:38:26.950 altname ens818f0np0 00:38:26.950 inet 192.168.100.8/24 scope global mlx_0_0 00:38:26.950 valid_lft forever preferred_lft forever 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:38:26.950 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:38:26.950 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:38:26.950 altname enp217s0f1np1 00:38:26.950 altname ens818f1np1 00:38:26.950 inet 192.168.100.9/24 scope global mlx_0_1 00:38:26.950 valid_lft forever preferred_lft forever 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 
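The two ConnectX interfaces shown above end up with consecutive addresses from the 192.168.100.0/24 test prefix, starting at .8 (NVMF_IP_LEAST_ADDR=8). A simplified sketch of that addressing step follows; it mirrors what allocate_nic_ips arranges in nvmf/common.sh but is illustrative, not the function's implementation.

    # Assign each RDMA interface the next address from the test prefix,
    # starting at .8 (simplified; the real helper also handles existing
    # addresses and interface state).
    prefix=192.168.100
    addr=8
    for dev in mlx_0_0 mlx_0_1; do
        ip addr add "$prefix.$addr/24" dev "$dev"
        addr=$((addr + 1))
    done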
00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip 
-o -4 addr show mlx_0_1 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:38:26.950 192.168.100.9' 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:38:26.950 192.168.100.9' 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # head -n 1 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:38:26.950 192.168.100.9' 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # tail -n +2 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # head -n 1 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:38:26.950 
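The trace above derives the RDMA target addresses by listing the IPv4 address of each mlx_0_* interface and then splitting the resulting two-line RDMA_IP_LIST with head/tail, exactly as the nvmf/common.sh helpers around @456-@458 do. A minimal standalone sketch of that logic (interface names taken from this log; this is an illustrative reconstruction, not the test suite's code):

    # Print the first IPv4 address of an interface, e.g. 192.168.100.8
    get_ip_address() {
        local interface=$1
        # "ip -o -4 addr show" prints one line per address; field 4 is the CIDR (addr/prefix)
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    # Two-line list, one IP per RDMA-capable interface (names assumed from the log)
    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"

    # First line becomes the first target IP, second line the second target IP
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

    echo "first: $NVMF_FIRST_TARGET_IP second: $NVMF_SECOND_TARGET_IP"

With the addresses shown in this run, the sketch would yield 192.168.100.8 and 192.168.100.9, which the test then uses as the kernel target address below.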
07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:26.950 07:27:41 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:38:30.236 Waiting for block devices as requested 00:38:30.236 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:30.236 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:30.236 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:30.236 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:30.236 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:30.236 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:30.494 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:30.494 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:30.494 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:30.753 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:30.753 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:30.753 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:31.013 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:31.013 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:31.013 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:31.272 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:31.272 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:38:31.272 07:27:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:38:31.273 07:27:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:31.273 07:27:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:38:31.273 07:27:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:38:31.273 07:27:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:31.273 07:27:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:38:31.273 07:27:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # 
block_in_use nvme0n1 00:38:31.273 07:27:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:38:31.273 07:27:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:31.532 No valid GPT data, bailing 00:38:31.532 07:27:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:31.532 07:27:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:38:31.532 07:27:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:38:31.532 07:27:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:38:31.532 07:27:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:38:31.532 07:27:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:31.532 07:27:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:31.532 07:27:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:31.532 07:27:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:38:31.532 07:27:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:38:31.532 07:27:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:38:31.532 07:27:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:38:31.532 07:27:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:38:31.532 07:27:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo rdma 00:38:31.532 07:27:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:38:31.532 07:27:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:38:31.532 07:27:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:31.532 07:27:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:38:31.532 00:38:31.532 Discovery Log Number of Records 2, Generation counter 2 00:38:31.532 =====Discovery Log Entry 0====== 00:38:31.532 trtype: rdma 00:38:31.532 adrfam: ipv4 00:38:31.532 subtype: current discovery subsystem 00:38:31.532 treq: not specified, sq flow control disable supported 00:38:31.532 portid: 1 00:38:31.532 trsvcid: 4420 00:38:31.532 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:31.532 traddr: 192.168.100.8 00:38:31.532 eflags: none 00:38:31.532 rdma_prtype: not specified 00:38:31.532 rdma_qptype: connected 00:38:31.532 rdma_cms: rdma-cm 00:38:31.532 rdma_pkey: 0x0000 00:38:31.532 =====Discovery Log Entry 1====== 00:38:31.532 trtype: rdma 00:38:31.532 adrfam: ipv4 00:38:31.532 subtype: nvme subsystem 00:38:31.532 
treq: not specified, sq flow control disable supported 00:38:31.532 portid: 1 00:38:31.532 trsvcid: 4420 00:38:31.532 subnqn: nqn.2016-06.io.spdk:testnqn 00:38:31.532 traddr: 192.168.100.8 00:38:31.532 eflags: none 00:38:31.532 rdma_prtype: not specified 00:38:31.532 rdma_qptype: connected 00:38:31.532 rdma_cms: rdma-cm 00:38:31.532 rdma_pkey: 0x0000 00:38:31.532 07:27:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:38:31.532 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:38:31.791 EAL: No free 2048 kB hugepages reported on node 1 00:38:31.791 ===================================================== 00:38:31.792 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:38:31.792 ===================================================== 00:38:31.792 Controller Capabilities/Features 00:38:31.792 ================================ 00:38:31.792 Vendor ID: 0000 00:38:31.792 Subsystem Vendor ID: 0000 00:38:31.792 Serial Number: 3db39828df65a3aa9732 00:38:31.792 Model Number: Linux 00:38:31.792 Firmware Version: 6.7.0-68 00:38:31.792 Recommended Arb Burst: 0 00:38:31.792 IEEE OUI Identifier: 00 00 00 00:38:31.792 Multi-path I/O 00:38:31.792 May have multiple subsystem ports: No 00:38:31.792 May have multiple controllers: No 00:38:31.792 Associated with SR-IOV VF: No 00:38:31.792 Max Data Transfer Size: Unlimited 00:38:31.792 Max Number of Namespaces: 0 00:38:31.792 Max Number of I/O Queues: 1024 00:38:31.792 NVMe Specification Version (VS): 1.3 00:38:31.792 NVMe Specification Version (Identify): 1.3 00:38:31.792 Maximum Queue Entries: 128 00:38:31.792 Contiguous Queues Required: No 00:38:31.792 Arbitration Mechanisms Supported 00:38:31.792 Weighted Round Robin: Not Supported 00:38:31.792 Vendor Specific: Not Supported 00:38:31.792 Reset Timeout: 7500 ms 00:38:31.792 Doorbell Stride: 4 bytes 00:38:31.792 NVM Subsystem Reset: Not Supported 00:38:31.792 Command Sets Supported 00:38:31.792 NVM Command Set: Supported 00:38:31.792 Boot Partition: Not Supported 00:38:31.792 Memory Page Size Minimum: 4096 bytes 00:38:31.792 Memory Page Size Maximum: 4096 bytes 00:38:31.792 Persistent Memory Region: Not Supported 00:38:31.792 Optional Asynchronous Events Supported 00:38:31.792 Namespace Attribute Notices: Not Supported 00:38:31.792 Firmware Activation Notices: Not Supported 00:38:31.792 ANA Change Notices: Not Supported 00:38:31.792 PLE Aggregate Log Change Notices: Not Supported 00:38:31.792 LBA Status Info Alert Notices: Not Supported 00:38:31.792 EGE Aggregate Log Change Notices: Not Supported 00:38:31.792 Normal NVM Subsystem Shutdown event: Not Supported 00:38:31.792 Zone Descriptor Change Notices: Not Supported 00:38:31.792 Discovery Log Change Notices: Supported 00:38:31.792 Controller Attributes 00:38:31.792 128-bit Host Identifier: Not Supported 00:38:31.792 Non-Operational Permissive Mode: Not Supported 00:38:31.792 NVM Sets: Not Supported 00:38:31.792 Read Recovery Levels: Not Supported 00:38:31.792 Endurance Groups: Not Supported 00:38:31.792 Predictable Latency Mode: Not Supported 00:38:31.792 Traffic Based Keep ALive: Not Supported 00:38:31.792 Namespace Granularity: Not Supported 00:38:31.792 SQ Associations: Not Supported 00:38:31.792 UUID List: Not Supported 00:38:31.792 Multi-Domain Subsystem: Not Supported 00:38:31.792 Fixed Capacity Management: Not Supported 00:38:31.792 Variable 
Capacity Management: Not Supported 00:38:31.792 Delete Endurance Group: Not Supported 00:38:31.792 Delete NVM Set: Not Supported 00:38:31.792 Extended LBA Formats Supported: Not Supported 00:38:31.792 Flexible Data Placement Supported: Not Supported 00:38:31.792 00:38:31.792 Controller Memory Buffer Support 00:38:31.792 ================================ 00:38:31.792 Supported: No 00:38:31.792 00:38:31.792 Persistent Memory Region Support 00:38:31.792 ================================ 00:38:31.792 Supported: No 00:38:31.792 00:38:31.792 Admin Command Set Attributes 00:38:31.792 ============================ 00:38:31.792 Security Send/Receive: Not Supported 00:38:31.792 Format NVM: Not Supported 00:38:31.792 Firmware Activate/Download: Not Supported 00:38:31.792 Namespace Management: Not Supported 00:38:31.792 Device Self-Test: Not Supported 00:38:31.792 Directives: Not Supported 00:38:31.792 NVMe-MI: Not Supported 00:38:31.792 Virtualization Management: Not Supported 00:38:31.792 Doorbell Buffer Config: Not Supported 00:38:31.792 Get LBA Status Capability: Not Supported 00:38:31.792 Command & Feature Lockdown Capability: Not Supported 00:38:31.792 Abort Command Limit: 1 00:38:31.792 Async Event Request Limit: 1 00:38:31.792 Number of Firmware Slots: N/A 00:38:31.792 Firmware Slot 1 Read-Only: N/A 00:38:31.792 Firmware Activation Without Reset: N/A 00:38:31.792 Multiple Update Detection Support: N/A 00:38:31.792 Firmware Update Granularity: No Information Provided 00:38:31.792 Per-Namespace SMART Log: No 00:38:31.792 Asymmetric Namespace Access Log Page: Not Supported 00:38:31.792 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:38:31.792 Command Effects Log Page: Not Supported 00:38:31.792 Get Log Page Extended Data: Supported 00:38:31.792 Telemetry Log Pages: Not Supported 00:38:31.792 Persistent Event Log Pages: Not Supported 00:38:31.792 Supported Log Pages Log Page: May Support 00:38:31.792 Commands Supported & Effects Log Page: Not Supported 00:38:31.792 Feature Identifiers & Effects Log Page:May Support 00:38:31.792 NVMe-MI Commands & Effects Log Page: May Support 00:38:31.792 Data Area 4 for Telemetry Log: Not Supported 00:38:31.792 Error Log Page Entries Supported: 1 00:38:31.792 Keep Alive: Not Supported 00:38:31.792 00:38:31.792 NVM Command Set Attributes 00:38:31.792 ========================== 00:38:31.792 Submission Queue Entry Size 00:38:31.792 Max: 1 00:38:31.792 Min: 1 00:38:31.792 Completion Queue Entry Size 00:38:31.792 Max: 1 00:38:31.792 Min: 1 00:38:31.792 Number of Namespaces: 0 00:38:31.792 Compare Command: Not Supported 00:38:31.792 Write Uncorrectable Command: Not Supported 00:38:31.792 Dataset Management Command: Not Supported 00:38:31.792 Write Zeroes Command: Not Supported 00:38:31.792 Set Features Save Field: Not Supported 00:38:31.792 Reservations: Not Supported 00:38:31.792 Timestamp: Not Supported 00:38:31.792 Copy: Not Supported 00:38:31.792 Volatile Write Cache: Not Present 00:38:31.792 Atomic Write Unit (Normal): 1 00:38:31.792 Atomic Write Unit (PFail): 1 00:38:31.792 Atomic Compare & Write Unit: 1 00:38:31.792 Fused Compare & Write: Not Supported 00:38:31.792 Scatter-Gather List 00:38:31.792 SGL Command Set: Supported 00:38:31.792 SGL Keyed: Supported 00:38:31.792 SGL Bit Bucket Descriptor: Not Supported 00:38:31.792 SGL Metadata Pointer: Not Supported 00:38:31.792 Oversized SGL: Not Supported 00:38:31.792 SGL Metadata Address: Not Supported 00:38:31.792 SGL Offset: Supported 00:38:31.792 Transport SGL Data Block: Not Supported 00:38:31.792 Replay 
Protected Memory Block: Not Supported 00:38:31.792 00:38:31.792 Firmware Slot Information 00:38:31.792 ========================= 00:38:31.792 Active slot: 0 00:38:31.792 00:38:31.792 00:38:31.792 Error Log 00:38:31.792 ========= 00:38:31.792 00:38:31.792 Active Namespaces 00:38:31.792 ================= 00:38:31.792 Discovery Log Page 00:38:31.792 ================== 00:38:31.792 Generation Counter: 2 00:38:31.792 Number of Records: 2 00:38:31.792 Record Format: 0 00:38:31.792 00:38:31.792 Discovery Log Entry 0 00:38:31.792 ---------------------- 00:38:31.792 Transport Type: 1 (RDMA) 00:38:31.792 Address Family: 1 (IPv4) 00:38:31.792 Subsystem Type: 3 (Current Discovery Subsystem) 00:38:31.792 Entry Flags: 00:38:31.792 Duplicate Returned Information: 0 00:38:31.792 Explicit Persistent Connection Support for Discovery: 0 00:38:31.792 Transport Requirements: 00:38:31.792 Secure Channel: Not Specified 00:38:31.792 Port ID: 1 (0x0001) 00:38:31.792 Controller ID: 65535 (0xffff) 00:38:31.792 Admin Max SQ Size: 32 00:38:31.792 Transport Service Identifier: 4420 00:38:31.792 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:38:31.792 Transport Address: 192.168.100.8 00:38:31.792 Transport Specific Address Subtype - RDMA 00:38:31.792 RDMA QP Service Type: 1 (Reliable Connected) 00:38:31.792 RDMA Provider Type: 1 (No provider specified) 00:38:31.792 RDMA CM Service: 1 (RDMA_CM) 00:38:31.792 Discovery Log Entry 1 00:38:31.792 ---------------------- 00:38:31.792 Transport Type: 1 (RDMA) 00:38:31.792 Address Family: 1 (IPv4) 00:38:31.792 Subsystem Type: 2 (NVM Subsystem) 00:38:31.792 Entry Flags: 00:38:31.792 Duplicate Returned Information: 0 00:38:31.792 Explicit Persistent Connection Support for Discovery: 0 00:38:31.792 Transport Requirements: 00:38:31.792 Secure Channel: Not Specified 00:38:31.792 Port ID: 1 (0x0001) 00:38:31.792 Controller ID: 65535 (0xffff) 00:38:31.792 Admin Max SQ Size: 32 00:38:31.792 Transport Service Identifier: 4420 00:38:31.792 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:38:31.792 Transport Address: 192.168.100.8 00:38:31.792 Transport Specific Address Subtype - RDMA 00:38:31.793 RDMA QP Service Type: 1 (Reliable Connected) 00:38:31.793 RDMA Provider Type: 1 (No provider specified) 00:38:31.793 RDMA CM Service: 1 (RDMA_CM) 00:38:31.793 07:27:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:32.052 EAL: No free 2048 kB hugepages reported on node 1 00:38:32.052 get_feature(0x01) failed 00:38:32.052 get_feature(0x02) failed 00:38:32.052 get_feature(0x04) failed 00:38:32.052 ===================================================== 00:38:32.052 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:38:32.052 ===================================================== 00:38:32.052 Controller Capabilities/Features 00:38:32.052 ================================ 00:38:32.052 Vendor ID: 0000 00:38:32.052 Subsystem Vendor ID: 0000 00:38:32.052 Serial Number: 3448e3b372142a21ce57 00:38:32.052 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:38:32.052 Firmware Version: 6.7.0-68 00:38:32.052 Recommended Arb Burst: 6 00:38:32.052 IEEE OUI Identifier: 00 00 00 00:38:32.052 Multi-path I/O 00:38:32.052 May have multiple subsystem ports: Yes 00:38:32.052 May have multiple controllers: Yes 00:38:32.052 Associated with 
SR-IOV VF: No 00:38:32.052 Max Data Transfer Size: 1048576 00:38:32.052 Max Number of Namespaces: 1024 00:38:32.052 Max Number of I/O Queues: 128 00:38:32.052 NVMe Specification Version (VS): 1.3 00:38:32.052 NVMe Specification Version (Identify): 1.3 00:38:32.052 Maximum Queue Entries: 128 00:38:32.052 Contiguous Queues Required: No 00:38:32.052 Arbitration Mechanisms Supported 00:38:32.052 Weighted Round Robin: Not Supported 00:38:32.052 Vendor Specific: Not Supported 00:38:32.052 Reset Timeout: 7500 ms 00:38:32.052 Doorbell Stride: 4 bytes 00:38:32.052 NVM Subsystem Reset: Not Supported 00:38:32.052 Command Sets Supported 00:38:32.052 NVM Command Set: Supported 00:38:32.052 Boot Partition: Not Supported 00:38:32.052 Memory Page Size Minimum: 4096 bytes 00:38:32.052 Memory Page Size Maximum: 4096 bytes 00:38:32.052 Persistent Memory Region: Not Supported 00:38:32.052 Optional Asynchronous Events Supported 00:38:32.052 Namespace Attribute Notices: Supported 00:38:32.052 Firmware Activation Notices: Not Supported 00:38:32.052 ANA Change Notices: Supported 00:38:32.052 PLE Aggregate Log Change Notices: Not Supported 00:38:32.052 LBA Status Info Alert Notices: Not Supported 00:38:32.052 EGE Aggregate Log Change Notices: Not Supported 00:38:32.052 Normal NVM Subsystem Shutdown event: Not Supported 00:38:32.052 Zone Descriptor Change Notices: Not Supported 00:38:32.052 Discovery Log Change Notices: Not Supported 00:38:32.052 Controller Attributes 00:38:32.052 128-bit Host Identifier: Supported 00:38:32.052 Non-Operational Permissive Mode: Not Supported 00:38:32.052 NVM Sets: Not Supported 00:38:32.052 Read Recovery Levels: Not Supported 00:38:32.052 Endurance Groups: Not Supported 00:38:32.052 Predictable Latency Mode: Not Supported 00:38:32.052 Traffic Based Keep ALive: Supported 00:38:32.052 Namespace Granularity: Not Supported 00:38:32.052 SQ Associations: Not Supported 00:38:32.052 UUID List: Not Supported 00:38:32.052 Multi-Domain Subsystem: Not Supported 00:38:32.052 Fixed Capacity Management: Not Supported 00:38:32.052 Variable Capacity Management: Not Supported 00:38:32.052 Delete Endurance Group: Not Supported 00:38:32.052 Delete NVM Set: Not Supported 00:38:32.052 Extended LBA Formats Supported: Not Supported 00:38:32.052 Flexible Data Placement Supported: Not Supported 00:38:32.052 00:38:32.052 Controller Memory Buffer Support 00:38:32.052 ================================ 00:38:32.052 Supported: No 00:38:32.052 00:38:32.052 Persistent Memory Region Support 00:38:32.052 ================================ 00:38:32.052 Supported: No 00:38:32.052 00:38:32.052 Admin Command Set Attributes 00:38:32.052 ============================ 00:38:32.052 Security Send/Receive: Not Supported 00:38:32.052 Format NVM: Not Supported 00:38:32.052 Firmware Activate/Download: Not Supported 00:38:32.052 Namespace Management: Not Supported 00:38:32.052 Device Self-Test: Not Supported 00:38:32.052 Directives: Not Supported 00:38:32.052 NVMe-MI: Not Supported 00:38:32.052 Virtualization Management: Not Supported 00:38:32.052 Doorbell Buffer Config: Not Supported 00:38:32.052 Get LBA Status Capability: Not Supported 00:38:32.052 Command & Feature Lockdown Capability: Not Supported 00:38:32.052 Abort Command Limit: 4 00:38:32.052 Async Event Request Limit: 4 00:38:32.052 Number of Firmware Slots: N/A 00:38:32.052 Firmware Slot 1 Read-Only: N/A 00:38:32.052 Firmware Activation Without Reset: N/A 00:38:32.052 Multiple Update Detection Support: N/A 00:38:32.052 Firmware Update Granularity: No Information Provided 
00:38:32.052 Per-Namespace SMART Log: Yes 00:38:32.052 Asymmetric Namespace Access Log Page: Supported 00:38:32.052 ANA Transition Time : 10 sec 00:38:32.052 00:38:32.052 Asymmetric Namespace Access Capabilities 00:38:32.052 ANA Optimized State : Supported 00:38:32.052 ANA Non-Optimized State : Supported 00:38:32.052 ANA Inaccessible State : Supported 00:38:32.052 ANA Persistent Loss State : Supported 00:38:32.052 ANA Change State : Supported 00:38:32.052 ANAGRPID is not changed : No 00:38:32.052 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:38:32.052 00:38:32.052 ANA Group Identifier Maximum : 128 00:38:32.052 Number of ANA Group Identifiers : 128 00:38:32.052 Max Number of Allowed Namespaces : 1024 00:38:32.052 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:38:32.052 Command Effects Log Page: Supported 00:38:32.052 Get Log Page Extended Data: Supported 00:38:32.052 Telemetry Log Pages: Not Supported 00:38:32.052 Persistent Event Log Pages: Not Supported 00:38:32.052 Supported Log Pages Log Page: May Support 00:38:32.052 Commands Supported & Effects Log Page: Not Supported 00:38:32.052 Feature Identifiers & Effects Log Page:May Support 00:38:32.052 NVMe-MI Commands & Effects Log Page: May Support 00:38:32.052 Data Area 4 for Telemetry Log: Not Supported 00:38:32.052 Error Log Page Entries Supported: 128 00:38:32.052 Keep Alive: Supported 00:38:32.052 Keep Alive Granularity: 1000 ms 00:38:32.052 00:38:32.052 NVM Command Set Attributes 00:38:32.052 ========================== 00:38:32.052 Submission Queue Entry Size 00:38:32.052 Max: 64 00:38:32.052 Min: 64 00:38:32.052 Completion Queue Entry Size 00:38:32.052 Max: 16 00:38:32.052 Min: 16 00:38:32.052 Number of Namespaces: 1024 00:38:32.052 Compare Command: Not Supported 00:38:32.052 Write Uncorrectable Command: Not Supported 00:38:32.052 Dataset Management Command: Supported 00:38:32.052 Write Zeroes Command: Supported 00:38:32.052 Set Features Save Field: Not Supported 00:38:32.052 Reservations: Not Supported 00:38:32.052 Timestamp: Not Supported 00:38:32.052 Copy: Not Supported 00:38:32.052 Volatile Write Cache: Present 00:38:32.052 Atomic Write Unit (Normal): 1 00:38:32.052 Atomic Write Unit (PFail): 1 00:38:32.052 Atomic Compare & Write Unit: 1 00:38:32.052 Fused Compare & Write: Not Supported 00:38:32.052 Scatter-Gather List 00:38:32.052 SGL Command Set: Supported 00:38:32.052 SGL Keyed: Supported 00:38:32.052 SGL Bit Bucket Descriptor: Not Supported 00:38:32.052 SGL Metadata Pointer: Not Supported 00:38:32.052 Oversized SGL: Not Supported 00:38:32.052 SGL Metadata Address: Not Supported 00:38:32.052 SGL Offset: Supported 00:38:32.052 Transport SGL Data Block: Not Supported 00:38:32.052 Replay Protected Memory Block: Not Supported 00:38:32.052 00:38:32.052 Firmware Slot Information 00:38:32.052 ========================= 00:38:32.052 Active slot: 0 00:38:32.052 00:38:32.052 Asymmetric Namespace Access 00:38:32.052 =========================== 00:38:32.052 Change Count : 0 00:38:32.052 Number of ANA Group Descriptors : 1 00:38:32.052 ANA Group Descriptor : 0 00:38:32.052 ANA Group ID : 1 00:38:32.052 Number of NSID Values : 1 00:38:32.052 Change Count : 0 00:38:32.052 ANA State : 1 00:38:32.052 Namespace Identifier : 1 00:38:32.052 00:38:32.052 Commands Supported and Effects 00:38:32.052 ============================== 00:38:32.052 Admin Commands 00:38:32.052 -------------- 00:38:32.052 Get Log Page (02h): Supported 00:38:32.052 Identify (06h): Supported 00:38:32.052 Abort (08h): Supported 00:38:32.052 Set Features (09h): Supported 
00:38:32.052 Get Features (0Ah): Supported 00:38:32.052 Asynchronous Event Request (0Ch): Supported 00:38:32.052 Keep Alive (18h): Supported 00:38:32.052 I/O Commands 00:38:32.052 ------------ 00:38:32.052 Flush (00h): Supported 00:38:32.052 Write (01h): Supported LBA-Change 00:38:32.052 Read (02h): Supported 00:38:32.052 Write Zeroes (08h): Supported LBA-Change 00:38:32.052 Dataset Management (09h): Supported 00:38:32.052 00:38:32.052 Error Log 00:38:32.052 ========= 00:38:32.052 Entry: 0 00:38:32.052 Error Count: 0x3 00:38:32.052 Submission Queue Id: 0x0 00:38:32.052 Command Id: 0x5 00:38:32.052 Phase Bit: 0 00:38:32.052 Status Code: 0x2 00:38:32.052 Status Code Type: 0x0 00:38:32.052 Do Not Retry: 1 00:38:32.052 Error Location: 0x28 00:38:32.052 LBA: 0x0 00:38:32.052 Namespace: 0x0 00:38:32.052 Vendor Log Page: 0x0 00:38:32.052 ----------- 00:38:32.052 Entry: 1 00:38:32.052 Error Count: 0x2 00:38:32.052 Submission Queue Id: 0x0 00:38:32.052 Command Id: 0x5 00:38:32.052 Phase Bit: 0 00:38:32.052 Status Code: 0x2 00:38:32.052 Status Code Type: 0x0 00:38:32.052 Do Not Retry: 1 00:38:32.052 Error Location: 0x28 00:38:32.052 LBA: 0x0 00:38:32.052 Namespace: 0x0 00:38:32.052 Vendor Log Page: 0x0 00:38:32.052 ----------- 00:38:32.052 Entry: 2 00:38:32.052 Error Count: 0x1 00:38:32.052 Submission Queue Id: 0x0 00:38:32.052 Command Id: 0x0 00:38:32.052 Phase Bit: 0 00:38:32.052 Status Code: 0x2 00:38:32.052 Status Code Type: 0x0 00:38:32.052 Do Not Retry: 1 00:38:32.052 Error Location: 0x28 00:38:32.052 LBA: 0x0 00:38:32.052 Namespace: 0x0 00:38:32.052 Vendor Log Page: 0x0 00:38:32.052 00:38:32.052 Number of Queues 00:38:32.052 ================ 00:38:32.052 Number of I/O Submission Queues: 128 00:38:32.052 Number of I/O Completion Queues: 128 00:38:32.052 00:38:32.052 ZNS Specific Controller Data 00:38:32.052 ============================ 00:38:32.052 Zone Append Size Limit: 0 00:38:32.052 00:38:32.052 00:38:32.052 Active Namespaces 00:38:32.052 ================= 00:38:32.052 get_feature(0x05) failed 00:38:32.052 Namespace ID:1 00:38:32.052 Command Set Identifier: NVM (00h) 00:38:32.052 Deallocate: Supported 00:38:32.052 Deallocated/Unwritten Error: Not Supported 00:38:32.052 Deallocated Read Value: Unknown 00:38:32.052 Deallocate in Write Zeroes: Not Supported 00:38:32.052 Deallocated Guard Field: 0xFFFF 00:38:32.052 Flush: Supported 00:38:32.052 Reservation: Not Supported 00:38:32.052 Namespace Sharing Capabilities: Multiple Controllers 00:38:32.052 Size (in LBAs): 3907029168 (1863GiB) 00:38:32.052 Capacity (in LBAs): 3907029168 (1863GiB) 00:38:32.052 Utilization (in LBAs): 3907029168 (1863GiB) 00:38:32.052 UUID: f623fa5f-2cfc-4896-a610-787fca0f6e51 00:38:32.052 Thin Provisioning: Not Supported 00:38:32.052 Per-NS Atomic Units: Yes 00:38:32.052 Atomic Boundary Size (Normal): 0 00:38:32.052 Atomic Boundary Size (PFail): 0 00:38:32.052 Atomic Boundary Offset: 0 00:38:32.052 NGUID/EUI64 Never Reused: No 00:38:32.052 ANA group ID: 1 00:38:32.052 Namespace Write Protected: No 00:38:32.052 Number of LBA Formats: 1 00:38:32.052 Current LBA Format: LBA Format #00 00:38:32.052 LBA Format #00: Data Size: 512 Metadata Size: 0 00:38:32.052 00:38:32.052 07:27:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:38:32.052 07:27:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:32.052 07:27:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:38:32.052 07:27:46 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:38:32.052 07:27:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:38:32.052 07:27:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:38:32.052 07:27:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:32.052 07:27:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:38:32.052 rmmod nvme_rdma 00:38:32.052 rmmod nvme_fabrics 00:38:32.052 07:27:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:32.052 07:27:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:38:32.052 07:27:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:38:32.052 07:27:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:38:32.052 07:27:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:32.052 07:27:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:38:32.052 07:27:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:38:32.052 07:27:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:32.052 07:27:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:38:32.052 07:27:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:32.052 07:27:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:32.052 07:27:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:32.052 07:27:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:32.052 07:27:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:38:32.052 07:27:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:38:32.053 07:27:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:38:35.416 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:35.416 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:35.416 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:35.416 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:35.675 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:35.675 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:35.675 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:35.675 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:35.675 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:35.675 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:35.675 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:35.675 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:35.675 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:35.675 
0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:35.675 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:35.675 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:37.582 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:38:37.582 00:38:37.582 real 0m18.936s 00:38:37.582 user 0m4.789s 00:38:37.582 sys 0m11.403s 00:38:37.582 07:27:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:37.582 07:27:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:38:37.582 ************************************ 00:38:37.582 END TEST nvmf_identify_kernel_target 00:38:37.582 ************************************ 00:38:37.582 07:27:52 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:38:37.582 07:27:52 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:38:37.582 07:27:52 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:37.582 07:27:52 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:37.582 ************************************ 00:38:37.582 START TEST nvmf_auth_host 00:38:37.582 ************************************ 00:38:37.582 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:38:37.841 * Looking for test storage... 00:38:37.841 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:38:37.841 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:38:37.841 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:38:37.841 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:37.841 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:37.841 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:37.841 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:37.841 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:37.841 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:37.841 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:37.841 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:37.841 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:37.841 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:37.841 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:38:37.841 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:38:37.841 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:37.841 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:37.841 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:37.841 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:37.841 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:38:37.841 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:37.841 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:37.841 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:37.841 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.841 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@47 -- # : 0 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:38:37.842 07:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:45.963 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- 
# echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:38:45.964 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:38:45.964 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:38:45.964 Found net devices under 0000:d9:00.0: mlx_0_0 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:38:45.964 Found net devices under 
0000:d9:00.1: mlx_0_1 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # rdma_device_init 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # uname 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 
00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:38:45.964 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:38:45.964 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:38:45.964 altname enp217s0f0np0 00:38:45.964 altname ens818f0np0 00:38:45.964 inet 192.168.100.8/24 scope global mlx_0_0 00:38:45.964 valid_lft forever preferred_lft forever 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:38:45.964 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:38:45.964 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:38:45.964 altname enp217s0f1np1 00:38:45.964 altname ens818f1np1 00:38:45.964 inet 192.168.100.9/24 scope global mlx_0_1 00:38:45.964 valid_lft forever preferred_lft forever 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@86 -- # get_rdma_if_list 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:38:45.964 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:38:45.965 192.168.100.9' 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:38:45.965 192.168.100.9' 00:38:45.965 07:28:00 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # head -n 1 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:38:45.965 192.168.100.9' 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # tail -n +2 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # head -n 1 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1896713 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1896713 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1896713 ']' 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
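With both RDMA ports resolved (192.168.100.8 and 192.168.100.9), nvmfappstart launches the SPDK NVMe-oF target with nvme_auth debug logging and blocks until the RPC socket answers. A rough sketch of that startup, with the socket poll as a simplified stand-in for waitforlisten:

# Start the target with nvme_auth debug traces (same flags as the trace above).
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
# Simplified stand-in for waitforlisten: poll until the RPC UNIX socket exists.
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done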
00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:45.965 07:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.903 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:46.903 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:38:46.903 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:46.903 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:46.903 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.903 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:46.903 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:38:46.903 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:38:46.903 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:38:46.903 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:46.903 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:38:46.903 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:38:46.903 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:38:46.903 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:38:46.903 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7308296a3902d0e5c863016ed381995f 00:38:46.903 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:38:46.903 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.cch 00:38:46.903 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7308296a3902d0e5c863016ed381995f 0 00:38:46.903 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7308296a3902d0e5c863016ed381995f 0 00:38:46.903 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:38:46.903 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:38:46.903 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7308296a3902d0e5c863016ed381995f 00:38:46.903 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:38:46.903 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.cch 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.cch 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.cch 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file 
key 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f39b0d7c1ab8a4d262fb6148043e327666200960ff1c6417c999b29ffe0dc5c5 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.QMO 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f39b0d7c1ab8a4d262fb6148043e327666200960ff1c6417c999b29ffe0dc5c5 3 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f39b0d7c1ab8a4d262fb6148043e327666200960ff1c6417c999b29ffe0dc5c5 3 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f39b0d7c1ab8a4d262fb6148043e327666200960ff1c6417c999b29ffe0dc5c5 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.QMO 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.QMO 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.QMO 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1088df2dd3de030052e59a6d6ca812ed08df3b6f055b2aa2 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.A2O 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1088df2dd3de030052e59a6d6ca812ed08df3b6f055b2aa2 0 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key 
DHHC-1 1088df2dd3de030052e59a6d6ca812ed08df3b6f055b2aa2 0 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1088df2dd3de030052e59a6d6ca812ed08df3b6f055b2aa2 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.A2O 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.A2O 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.A2O 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=aae0ed424b6bc0198ae5ee77426e2ac42cdc988916c3cc12 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.oYb 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key aae0ed424b6bc0198ae5ee77426e2ac42cdc988916c3cc12 2 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 aae0ed424b6bc0198ae5ee77426e2ac42cdc988916c3cc12 2 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=aae0ed424b6bc0198ae5ee77426e2ac42cdc988916c3cc12 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.oYb 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.oYb 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.oYb 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:47.163 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:38:47.164 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:38:47.164 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:38:47.164 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:38:47.164 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=87a9192a0e5177bfec439877b3042e1f 00:38:47.164 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:38:47.164 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.SKq 00:38:47.164 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 87a9192a0e5177bfec439877b3042e1f 1 00:38:47.164 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 87a9192a0e5177bfec439877b3042e1f 1 00:38:47.164 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:38:47.164 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:38:47.164 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=87a9192a0e5177bfec439877b3042e1f 00:38:47.164 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:38:47.164 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:38:47.423 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.SKq 00:38:47.423 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.SKq 00:38:47.423 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.SKq 00:38:47.423 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:38:47.423 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:38:47.423 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:47.423 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:38:47.423 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:38:47.423 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:38:47.423 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:38:47.423 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5015a8f6acee4c535a9d02d4418d2f84 00:38:47.423 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:38:47.423 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.uaG 00:38:47.423 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5015a8f6acee4c535a9d02d4418d2f84 1 00:38:47.423 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5015a8f6acee4c535a9d02d4418d2f84 1 00:38:47.423 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:38:47.423 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
prefix=DHHC-1 00:38:47.423 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5015a8f6acee4c535a9d02d4418d2f84 00:38:47.423 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:38:47.423 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:38:47.423 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.uaG 00:38:47.423 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.uaG 00:38:47.423 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.uaG 00:38:47.423 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:38:47.423 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:38:47.423 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:47.423 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6eb1ce2a859a297cae072de477d7256c66569ee8b7b8d3c8 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.5eZ 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6eb1ce2a859a297cae072de477d7256c66569ee8b7b8d3c8 2 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6eb1ce2a859a297cae072de477d7256c66569ee8b7b8d3c8 2 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6eb1ce2a859a297cae072de477d7256c66569ee8b7b8d3c8 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.5eZ 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.5eZ 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.5eZ 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 
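Every keys[i]/ckeys[i] file above comes out of the same gen_dhchap_key pattern: read len/2 random bytes with xxd, wrap them into a DHHC-1:&lt;digest&gt;:&lt;base64&gt;: secret (the inline "python -" step, which is assumed here to also append a CRC-32 of the raw bytes before base64-encoding), then store the result in a mode-0600 temp file. A hedged sketch of one such key, for the null/32 case:

# Draw 16 random bytes as hex, as gen_dhchap_key null 32 does above.
key=$(xxd -p -c0 -l 16 /dev/urandom)
file=$(mktemp -t spdk.key-null.XXX)
# Wrap as DHHC-1:<digest index>:<base64>: -- the CRC-32 suffix is an assumption
# about what the inline "python -" step computes, not something visible in the log.
python3 - "$key" 0 > "$file" <<'PY'
import sys, base64, zlib
raw = bytes.fromhex(sys.argv[1])
digest = int(sys.argv[2])
blob = raw + zlib.crc32(raw).to_bytes(4, "little")
print(f"DHHC-1:{digest:02}:{base64.b64encode(blob).decode()}:")
PY
chmod 0600 "$file"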
00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f60b2c067ccfb0712df9187b603a8c7c 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.61o 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f60b2c067ccfb0712df9187b603a8c7c 0 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f60b2c067ccfb0712df9187b603a8c7c 0 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f60b2c067ccfb0712df9187b603a8c7c 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.61o 00:38:47.424 07:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.61o 00:38:47.424 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.61o 00:38:47.424 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:38:47.424 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:38:47.424 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:47.424 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:38:47.424 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:38:47.424 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:38:47.424 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:38:47.424 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fbe5c3f368e6a5c268608d2286a705ccf7d64e614938d30317f7ad49d84778c4 00:38:47.424 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:38:47.424 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.zPP 00:38:47.424 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fbe5c3f368e6a5c268608d2286a705ccf7d64e614938d30317f7ad49d84778c4 3 00:38:47.424 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fbe5c3f368e6a5c268608d2286a705ccf7d64e614938d30317f7ad49d84778c4 3 00:38:47.424 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:38:47.424 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:38:47.424 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fbe5c3f368e6a5c268608d2286a705ccf7d64e614938d30317f7ad49d84778c4 00:38:47.424 07:28:02 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:38:47.424 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:38:47.683 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.zPP 00:38:47.683 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.zPP 00:38:47.683 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.zPP 00:38:47.683 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:38:47.683 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1896713 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1896713 ']' 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:47.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.cch 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.QMO ]] 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QMO 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.A2O 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- 
# [[ -n /tmp/spdk.key-sha384.oYb ]] 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.oYb 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.SKq 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.uaG ]] 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uaG 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.5eZ 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:47.684 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.61o ]] 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.61o 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.zPP 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:38:47.943 07:28:02 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:47.943 07:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:38:52.138 Waiting for block devices as requested 00:38:52.138 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:52.138 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:52.138 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:52.138 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:52.138 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:52.397 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:52.397 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:52.397 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:52.397 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:52.656 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:52.656 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:52.656 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:52.914 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:52.914 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:52.914 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:53.173 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:53.173 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:38:54.109 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:38:54.109 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:54.109 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:38:54.109 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:38:54.109 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:54.109 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:38:54.109 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:38:54.109 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:38:54.109 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:54.109 No valid GPT data, bailing 00:38:54.109 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:54.109 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:38:54.110 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:38:54.110 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:38:54.110 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:38:54.110 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:38:54.110 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:38:54.110 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:54.110 07:28:08 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:38:54.110 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:38:54.110 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:38:54.110 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:38:54.110 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:38:54.110 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo rdma 00:38:54.110 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:38:54.110 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:38:54.110 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:54.110 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:38:54.110 00:38:54.110 Discovery Log Number of Records 2, Generation counter 2 00:38:54.110 =====Discovery Log Entry 0====== 00:38:54.110 trtype: rdma 00:38:54.110 adrfam: ipv4 00:38:54.110 subtype: current discovery subsystem 00:38:54.110 treq: not specified, sq flow control disable supported 00:38:54.110 portid: 1 00:38:54.110 trsvcid: 4420 00:38:54.110 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:54.110 traddr: 192.168.100.8 00:38:54.110 eflags: none 00:38:54.110 rdma_prtype: not specified 00:38:54.110 rdma_qptype: connected 00:38:54.110 rdma_cms: rdma-cm 00:38:54.110 rdma_pkey: 0x0000 00:38:54.110 =====Discovery Log Entry 1====== 00:38:54.110 trtype: rdma 00:38:54.110 adrfam: ipv4 00:38:54.110 subtype: nvme subsystem 00:38:54.110 treq: not specified, sq flow control disable supported 00:38:54.110 portid: 1 00:38:54.110 trsvcid: 4420 00:38:54.110 subnqn: nqn.2024-02.io.spdk:cnode0 00:38:54.110 traddr: 192.168.100.8 00:38:54.110 eflags: none 00:38:54.110 rdma_prtype: not specified 00:38:54.110 rdma_qptype: connected 00:38:54.110 rdma_cms: rdma-cm 00:38:54.110 rdma_pkey: 0x0000 00:38:54.110 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:38:54.110 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:38:54.110 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:38:54.110 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:38:54.110 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:54.110 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:54.110 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:54.110 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:54.110 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:38:54.110 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:38:54.110 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:54.110 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:54.110 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:38:54.110 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: ]] 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:38:54.369 07:28:08 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:54.369 nvme0n1 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:54.369 07:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:54.628 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:54.628 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:54.628 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:54.628 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:54.628 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:54.628 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:54.628 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:38:54.628 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:54.628 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:54.628 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:38:54.628 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:54.628 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:54.628 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:54.628 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:54.628 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzMwODI5NmEzOTAyZDBlNWM4NjMwMTZlZDM4MTk5NWa5/nRo: 00:38:54.628 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: 00:38:54.628 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:54.628 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:54.628 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzMwODI5NmEzOTAyZDBlNWM4NjMwMTZlZDM4MTk5NWa5/nRo: 00:38:54.628 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: ]] 00:38:54.628 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: 00:38:54.628 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:38:54.628 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:54.628 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:54.628 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:54.628 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:54.628 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:54.628 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:38:54.629 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:54.629 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:54.629 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:54.629 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:54.629 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:54.629 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:54.629 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:54.629 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:54.629 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:54.629 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:38:54.629 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:38:54.629 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:38:54.629 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:38:54.629 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:38:54.629 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:54.629 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:54.629 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:54.888 nvme0n1 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
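Each connect_authenticate round that follows repeats the same RPC sequence: restrict the initiator's DH-HMAC-CHAP digests and DH groups, attach a controller with the matching --dhchap-key/--dhchap-ctrlr-key pair, check that nvme0 appears in bdev_nvme_get_controllers, then detach it. A sketch of one round for keyid 0, using scripts/rpc.py as an assumed stand-in for the rpc_cmd helper:

rpc=./scripts/rpc.py   # assumed stand-in for the rpc_cmd wrapper used in the trace
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
$rpc bdev_nvme_detach_controller nvme0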
00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: ]] 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:54.888 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:54.889 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:38:54.889 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:38:54.889 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:38:54.889 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:38:54.889 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:38:54.889 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:54.889 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:54.889 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.148 nvme0n1 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdhOTE5MmEwZTUxNzdiZmVjNDM5ODc3YjMwNDJlMWavo7Mw: 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdhOTE5MmEwZTUxNzdiZmVjNDM5ODc3YjMwNDJlMWavo7Mw: 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: ]] 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:55.148 07:28:09 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:55.148 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.409 nvme0n1 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmViMWNlMmE4NTlhMjk3Y2FlMDcyZGU0NzdkNzI1NmM2NjU2OWVlOGI3YjhkM2M4F3oqSA==: 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmViMWNlMmE4NTlhMjk3Y2FlMDcyZGU0NzdkNzI1NmM2NjU2OWVlOGI3YjhkM2M4F3oqSA==: 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: ]] 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:55.409 07:28:09 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:38:55.409 07:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:38:55.409 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:55.409 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:55.409 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.703 nvme0n1 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for 
keyid in "${!keys[@]}" 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJlNWMzZjM2OGU2YTVjMjY4NjA4ZDIyODZhNzA1Y2NmN2Q2NGU2MTQ5MzhkMzAzMTdmN2FkNDlkODQ3NzhjNAXvFjk=: 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJlNWMzZjM2OGU2YTVjMjY4NjA4ZDIyODZhNzA1Y2NmN2Q2NGU2MTQ5MzhkMzAzMTdmN2FkNDlkODQ3NzhjNAXvFjk=: 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:38:55.703 07:28:10 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:55.703 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.962 nvme0n1 00:38:55.962 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:55.962 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:55.962 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:55.962 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:55.962 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.962 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:55.962 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:55.962 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:55.962 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:55.962 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.220 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:56.220 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:56.220 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:56.220 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:38:56.220 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:56.220 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:56.220 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:56.220 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:56.220 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzMwODI5NmEzOTAyZDBlNWM4NjMwMTZlZDM4MTk5NWa5/nRo: 00:38:56.220 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: 00:38:56.220 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:56.220 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:56.220 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzMwODI5NmEzOTAyZDBlNWM4NjMwMTZlZDM4MTk5NWa5/nRo: 00:38:56.220 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: ]] 00:38:56.220 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: 
00:38:56.220 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:38:56.220 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:56.220 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:56.220 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:56.220 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:56.220 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:56.220 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:38:56.220 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:56.221 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.221 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:56.221 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:56.221 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:56.221 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:56.221 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:56.221 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:56.221 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:56.221 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:38:56.221 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:38:56.221 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:38:56.221 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:38:56.221 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:38:56.221 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:56.221 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:56.221 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.221 nvme0n1 00:38:56.221 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:56.479 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:56.479 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:56.479 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:56.479 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.479 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:56.479 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:38:56.479 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:56.479 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:56.479 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.479 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:56.479 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:56.479 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:38:56.479 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:56.479 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:56.479 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:56.479 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:56.479 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:38:56.480 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:38:56.480 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:56.480 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:56.480 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:38:56.480 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: ]] 00:38:56.480 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:38:56.480 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:38:56.480 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:56.480 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:56.480 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:56.480 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:56.480 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:56.480 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:38:56.480 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:56.480 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.480 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:56.480 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:56.480 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:56.480 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:38:56.480 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:56.480 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:56.480 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:56.480 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:38:56.480 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:38:56.480 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:38:56.480 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:38:56.480 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:38:56.480 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:56.480 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:56.480 07:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.739 nvme0n1 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdhOTE5MmEwZTUxNzdiZmVjNDM5ODc3YjMwNDJlMWavo7Mw: 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: 00:38:56.739 07:28:11 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdhOTE5MmEwZTUxNzdiZmVjNDM5ODc3YjMwNDJlMWavo7Mw: 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: ]] 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:56.739 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.999 nvme0n1 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:56.999 
07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmViMWNlMmE4NTlhMjk3Y2FlMDcyZGU0NzdkNzI1NmM2NjU2OWVlOGI3YjhkM2M4F3oqSA==: 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmViMWNlMmE4NTlhMjk3Y2FlMDcyZGU0NzdkNzI1NmM2NjU2OWVlOGI3YjhkM2M4F3oqSA==: 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: ]] 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:56.999 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:57.258 nvme0n1 00:38:57.258 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:57.258 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:57.258 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:57.258 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:57.258 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:57.258 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:57.517 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:57.517 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:57.517 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:57.517 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:57.517 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:57.517 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:57.517 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:38:57.517 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:57.517 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:38:57.517 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:57.517 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:57.517 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJlNWMzZjM2OGU2YTVjMjY4NjA4ZDIyODZhNzA1Y2NmN2Q2NGU2MTQ5MzhkMzAzMTdmN2FkNDlkODQ3NzhjNAXvFjk=: 00:38:57.517 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:57.517 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:57.517 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:57.517 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJlNWMzZjM2OGU2YTVjMjY4NjA4ZDIyODZhNzA1Y2NmN2Q2NGU2MTQ5MzhkMzAzMTdmN2FkNDlkODQ3NzhjNAXvFjk=: 00:38:57.517 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:57.517 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:38:57.518 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:57.518 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:57.518 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:57.518 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:57.518 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:57.518 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:38:57.518 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:57.518 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:57.518 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:57.518 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:57.518 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:57.518 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:57.518 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:57.518 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:57.518 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:57.518 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:38:57.518 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:38:57.518 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:38:57.518 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:38:57.518 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:38:57.518 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:57.518 07:28:11 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:57.518 07:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:57.777 nvme0n1 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzMwODI5NmEzOTAyZDBlNWM4NjMwMTZlZDM4MTk5NWa5/nRo: 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzMwODI5NmEzOTAyZDBlNWM4NjMwMTZlZDM4MTk5NWa5/nRo: 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: ]] 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:57.777 
07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:57.777 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:57.778 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:57.778 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:57.778 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:38:57.778 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:38:57.778 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:38:57.778 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:38:57.778 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:38:57.778 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:57.778 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:57.778 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:58.037 nvme0n1 00:38:58.037 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:58.037 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:58.037 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:58.037 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:58.037 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:58.037 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:58.037 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:58.037 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:58.037 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:58.037 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
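Zooming out, the trace is driven by a nested loop: for each DH group the target-side keys are reprogrammed and the host reconnects once per keyid (ffdhe2048 above, then ffdhe3072, now ffdhe4096). The outline below is pieced together from the "for dhgroup" / "for keyid" trace lines; the dhgroups and keys arrays and the destinations of the nvmet_auth_set_key echoes are not visible in this excerpt, so they are left abstract rather than guessed.

    # sha256 pass visible in this log: dhgroups ffdhe2048, ffdhe3072, ffdhe4096, ...
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do              # keyids 0..4 in the trace
            # target side: program the hash ('hmac(sha256)'), DH group and DHHC-1 secret;
            # the redirect targets of the echoes (nvmet auth attributes) are elided here
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"
            # host side: attach/verify/detach as sketched earlier
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done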
00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: ]] 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:58.296 07:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:58.555 nvme0n1 00:38:58.555 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:58.555 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:58.555 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:58.555 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdhOTE5MmEwZTUxNzdiZmVjNDM5ODc3YjMwNDJlMWavo7Mw: 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdhOTE5MmEwZTUxNzdiZmVjNDM5ODc3YjMwNDJlMWavo7Mw: 00:38:58.556 07:28:13 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: ]] 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:58.556 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:58.815 nvme0n1 00:38:58.815 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:58.815 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:58.815 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:58.815 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:58.815 07:28:13 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmViMWNlMmE4NTlhMjk3Y2FlMDcyZGU0NzdkNzI1NmM2NjU2OWVlOGI3YjhkM2M4F3oqSA==: 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmViMWNlMmE4NTlhMjk3Y2FlMDcyZGU0NzdkNzI1NmM2NjU2OWVlOGI3YjhkM2M4F3oqSA==: 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: ]] 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.073 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:59.332 nvme0n1 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmJlNWMzZjM2OGU2YTVjMjY4NjA4ZDIyODZhNzA1Y2NmN2Q2NGU2MTQ5MzhkMzAzMTdmN2FkNDlkODQ3NzhjNAXvFjk=: 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJlNWMzZjM2OGU2YTVjMjY4NjA4ZDIyODZhNzA1Y2NmN2Q2NGU2MTQ5MzhkMzAzMTdmN2FkNDlkODQ3NzhjNAXvFjk=: 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.332 07:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:59.591 nvme0n1 00:38:59.591 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzMwODI5NmEzOTAyZDBlNWM4NjMwMTZlZDM4MTk5NWa5/nRo: 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzMwODI5NmEzOTAyZDBlNWM4NjMwMTZlZDM4MTk5NWa5/nRo: 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: ]] 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.850 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:00.109 nvme0n1 00:39:00.109 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:00.109 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:00.368 07:28:14 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: ]] 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 
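[editor's note] The trace above repeats the same DH-HMAC-CHAP host-side sequence for each digest/dhgroup/keyid combination: restrict the allowed digests and DH groups, attach the controller over RDMA with the host key and the bidirectional controller key, confirm the controller name, then detach before the next iteration. Below is a minimal sketch of that sequence using the RPC names and flags visible in the log; it assumes SPDK's scripts/rpc.py is used in place of the test suite's rpc_cmd wrapper and that the keys key1/ckey1 are already configured as in the test setup (their registration is not shown in this excerpt).

    # Limit the host to one digest/dhgroup combination for this iteration
    ./scripts/rpc.py bdev_nvme_set_options \
            --dhchap-digests sha256 \
            --dhchap-dhgroups ffdhe6144

    # Attach over RDMA, authenticating with key1 and controller key ckey1
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
            -a 192.168.100.8 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Verify the controller came up (expect "nvme0"), then detach
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0

The log continues with the same pattern for keyids 2 through 4 and for the ffdhe8192 group.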
00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:00.368 07:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:00.936 nvme0n1 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdhOTE5MmEwZTUxNzdiZmVjNDM5ODc3YjMwNDJlMWavo7Mw: 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdhOTE5MmEwZTUxNzdiZmVjNDM5ODc3YjMwNDJlMWavo7Mw: 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: ]] 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:00.936 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:01.195 nvme0n1 00:39:01.195 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:01.195 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:01.195 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:01.195 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:01.195 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:01.195 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:01.454 07:28:15 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:01.454 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:01.454 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:01.454 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:01.454 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:01.454 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:01.454 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:39:01.454 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:01.454 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:01.454 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:01.454 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:01.454 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmViMWNlMmE4NTlhMjk3Y2FlMDcyZGU0NzdkNzI1NmM2NjU2OWVlOGI3YjhkM2M4F3oqSA==: 00:39:01.455 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: 00:39:01.455 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:01.455 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:01.455 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmViMWNlMmE4NTlhMjk3Y2FlMDcyZGU0NzdkNzI1NmM2NjU2OWVlOGI3YjhkM2M4F3oqSA==: 00:39:01.455 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: ]] 00:39:01.455 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: 00:39:01.455 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:39:01.455 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:01.455 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:01.455 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:01.455 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:01.455 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:01.455 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:39:01.455 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:01.455 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:01.455 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:01.455 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:01.455 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:01.455 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:39:01.455 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:01.455 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:01.455 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:01.455 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:01.455 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:01.455 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:01.455 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:01.455 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:01.455 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:01.455 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:01.455 07:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:01.714 nvme0n1 00:39:01.714 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:01.714 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:01.714 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:01.714 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:01.714 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:01.714 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJlNWMzZjM2OGU2YTVjMjY4NjA4ZDIyODZhNzA1Y2NmN2Q2NGU2MTQ5MzhkMzAzMTdmN2FkNDlkODQ3NzhjNAXvFjk=: 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:01.973 07:28:16 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJlNWMzZjM2OGU2YTVjMjY4NjA4ZDIyODZhNzA1Y2NmN2Q2NGU2MTQ5MzhkMzAzMTdmN2FkNDlkODQ3NzhjNAXvFjk=: 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:01.973 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:02.232 nvme0n1 00:39:02.232 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:02.232 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:02.232 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:39:02.232 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:02.232 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:02.232 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzMwODI5NmEzOTAyZDBlNWM4NjMwMTZlZDM4MTk5NWa5/nRo: 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzMwODI5NmEzOTAyZDBlNWM4NjMwMTZlZDM4MTk5NWa5/nRo: 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: ]] 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:39:02.492 07:28:16 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:02.492 07:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:03.061 nvme0n1 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:03.061 07:28:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: ]] 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:03.061 07:28:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.061 07:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:03.629 nvme0n1 00:39:03.629 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.629 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:03.629 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.629 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:03.629 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdhOTE5MmEwZTUxNzdiZmVjNDM5ODc3YjMwNDJlMWavo7Mw: 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdhOTE5MmEwZTUxNzdiZmVjNDM5ODc3YjMwNDJlMWavo7Mw: 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: ]] 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup 
keyid ckey 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.889 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.458 nvme0n1 00:39:04.458 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:04.458 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:04.458 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:04.458 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:04.458 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.458 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:04.458 07:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:04.458 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:04.458 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 
-- # xtrace_disable 00:39:04.458 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.458 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:04.458 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:04.458 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:39:04.458 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:04.458 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:04.458 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:04.458 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:04.458 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmViMWNlMmE4NTlhMjk3Y2FlMDcyZGU0NzdkNzI1NmM2NjU2OWVlOGI3YjhkM2M4F3oqSA==: 00:39:04.458 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: 00:39:04.458 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:04.458 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:04.459 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmViMWNlMmE4NTlhMjk3Y2FlMDcyZGU0NzdkNzI1NmM2NjU2OWVlOGI3YjhkM2M4F3oqSA==: 00:39:04.459 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: ]] 00:39:04.459 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: 00:39:04.459 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:39:04.459 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:04.459 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:04.459 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:04.459 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:04.459 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:04.459 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:39:04.459 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:04.459 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.459 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:04.459 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:04.459 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:04.459 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:04.459 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:04.459 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:04.459 07:28:19 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:04.459 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:04.459 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:04.459 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:04.459 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:04.459 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:04.459 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:04.459 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:04.459 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.396 nvme0n1 00:39:05.396 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:05.396 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:05.396 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:05.396 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:05.396 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.396 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:05.396 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:05.396 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:05.396 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:05.396 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.396 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:05.396 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:05.396 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:39:05.396 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:05.396 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:05.396 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:05.396 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:05.396 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJlNWMzZjM2OGU2YTVjMjY4NjA4ZDIyODZhNzA1Y2NmN2Q2NGU2MTQ5MzhkMzAzMTdmN2FkNDlkODQ3NzhjNAXvFjk=: 00:39:05.396 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:05.396 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:05.396 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:05.396 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZmJlNWMzZjM2OGU2YTVjMjY4NjA4ZDIyODZhNzA1Y2NmN2Q2NGU2MTQ5MzhkMzAzMTdmN2FkNDlkODQ3NzhjNAXvFjk=: 00:39:05.396 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:05.396 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:39:05.396 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:05.396 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:05.396 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:05.397 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:05.397 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:05.397 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:39:05.397 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:05.397 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.397 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:05.397 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:05.397 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:05.397 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:05.397 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:05.397 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:05.397 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:05.397 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:05.397 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:05.397 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:05.397 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:05.397 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:05.397 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:05.397 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:05.397 07:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.965 nvme0n1 00:39:05.965 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:05.965 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:05.965 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:05.965 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:05.965 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.965 07:28:20 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:05.965 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:05.965 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:05.965 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:05.965 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.965 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:05.965 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:39:05.965 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:05.965 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:05.965 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:39:05.965 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:05.965 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:05.965 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzMwODI5NmEzOTAyZDBlNWM4NjMwMTZlZDM4MTk5NWa5/nRo: 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzMwODI5NmEzOTAyZDBlNWM4NjMwMTZlZDM4MTk5NWa5/nRo: 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: ]] 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:05.966 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.226 nvme0n1 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: ]] 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:06.226 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.485 nvme0n1 00:39:06.485 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:06.485 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:06.485 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:06.485 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:06.485 07:28:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdhOTE5MmEwZTUxNzdiZmVjNDM5ODc3YjMwNDJlMWavo7Mw: 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdhOTE5MmEwZTUxNzdiZmVjNDM5ODc3YjMwNDJlMWavo7Mw: 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: ]] 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:06.485 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.743 nvme0n1 00:39:06.743 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:06.743 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:06.743 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:06.743 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:06.743 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.743 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:06.743 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:06.743 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:06.743 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:06.743 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.001 07:28:21 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmViMWNlMmE4NTlhMjk3Y2FlMDcyZGU0NzdkNzI1NmM2NjU2OWVlOGI3YjhkM2M4F3oqSA==: 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmViMWNlMmE4NTlhMjk3Y2FlMDcyZGU0NzdkNzI1NmM2NjU2OWVlOGI3YjhkM2M4F3oqSA==: 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: ]] 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:07.001 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.002 nvme0n1 00:39:07.002 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:07.002 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:07.002 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:07.002 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:07.002 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.002 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJlNWMzZjM2OGU2YTVjMjY4NjA4ZDIyODZhNzA1Y2NmN2Q2NGU2MTQ5MzhkMzAzMTdmN2FkNDlkODQ3NzhjNAXvFjk=: 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJlNWMzZjM2OGU2YTVjMjY4NjA4ZDIyODZhNzA1Y2NmN2Q2NGU2MTQ5MzhkMzAzMTdmN2FkNDlkODQ3NzhjNAXvFjk=: 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 
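
The nvmet_auth_set_key calls traced in this section only show the values being echoed (the 'hmac(<digest>)' string, the DH group, the DHHC-1 secret, and the optional controller key at host/auth.sh@48-51); the xtrace does not show where those echoes are redirected. The following is a minimal sketch of what such a helper plausibly does on the target side, assuming the values land in the Linux kernel nvmet configfs host entry; the configfs path and the dhchap_* attribute names are assumptions, not taken from this log, and the keys/ckeys arrays are the per-keyid lists iterated by the traced loop at host/auth.sh@102.

    # Sketch only: provision one DH-HMAC-CHAP key pair on the target for the test host NQN.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}              # per-keyid secrets, as in the traced loop
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed configfs location

        echo "hmac(${digest})" > "${host}/dhchap_hash"             # e.g. hmac(sha384), assumed attribute
        echo "${dhgroup}"      > "${host}/dhchap_dhgroup"          # e.g. ffdhe2048, assumed attribute
        echo "${key}"          > "${host}/dhchap_key"              # DHHC-1:0N:<secret>: value from the key list
        # keyid 4 has no controller key, hence the [[ -z '' ]] guard seen in the trace above
        [[ -z ${ckey} ]] || echo "${ckey}" > "${host}/dhchap_ctrl_key"
    }
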
00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:07.260 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.519 nvme0n1 00:39:07.519 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:07.519 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:07.519 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:07.519 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:07.519 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.519 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:07.519 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:07.519 
07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:07.519 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:07.519 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.519 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:07.519 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:07.519 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:07.519 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:39:07.519 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:07.519 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:07.519 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:07.519 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:07.519 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzMwODI5NmEzOTAyZDBlNWM4NjMwMTZlZDM4MTk5NWa5/nRo: 00:39:07.519 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: 00:39:07.519 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:07.519 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:07.519 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzMwODI5NmEzOTAyZDBlNWM4NjMwMTZlZDM4MTk5NWa5/nRo: 00:39:07.519 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: ]] 00:39:07.519 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: 00:39:07.519 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:39:07.519 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:07.519 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:07.519 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:07.520 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:07.520 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:07.520 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:39:07.520 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:07.520 07:28:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.520 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:07.520 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:07.520 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:39:07.520 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:07.520 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:07.520 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:07.520 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:07.520 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:07.520 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:07.520 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:07.520 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:07.520 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:07.520 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:07.520 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:07.520 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.778 nvme0n1 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: ]] 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:07.778 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:07.778 07:28:22 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:08.073 nvme0n1 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdhOTE5MmEwZTUxNzdiZmVjNDM5ODc3YjMwNDJlMWavo7Mw: 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdhOTE5MmEwZTUxNzdiZmVjNDM5ODc3YjMwNDJlMWavo7Mw: 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: ]] 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:08.073 07:28:22 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:08.073 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:08.331 nvme0n1 00:39:08.331 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:08.331 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:08.331 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:08.331 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:08.331 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:08.331 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:08.331 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:08.331 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:08.331 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:08.331 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:08.590 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:08.590 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:08.590 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 
3 00:39:08.590 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:08.590 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:08.590 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:08.590 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:08.590 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmViMWNlMmE4NTlhMjk3Y2FlMDcyZGU0NzdkNzI1NmM2NjU2OWVlOGI3YjhkM2M4F3oqSA==: 00:39:08.590 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: 00:39:08.590 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:08.590 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:08.590 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmViMWNlMmE4NTlhMjk3Y2FlMDcyZGU0NzdkNzI1NmM2NjU2OWVlOGI3YjhkM2M4F3oqSA==: 00:39:08.590 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: ]] 00:39:08.590 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: 00:39:08.590 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:39:08.590 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:08.590 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:08.590 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:08.590 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:08.590 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:08.590 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:39:08.590 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:08.590 07:28:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:08.590 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:08.590 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:08.590 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:08.590 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:08.590 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:08.590 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:08.590 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:08.590 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:08.590 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:08.590 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:08.590 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:08.590 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:08.591 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:08.591 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:08.591 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:08.850 nvme0n1 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJlNWMzZjM2OGU2YTVjMjY4NjA4ZDIyODZhNzA1Y2NmN2Q2NGU2MTQ5MzhkMzAzMTdmN2FkNDlkODQ3NzhjNAXvFjk=: 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJlNWMzZjM2OGU2YTVjMjY4NjA4ZDIyODZhNzA1Y2NmN2Q2NGU2MTQ5MzhkMzAzMTdmN2FkNDlkODQ3NzhjNAXvFjk=: 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 
00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:08.850 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:09.109 nvme0n1 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
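For reference, the sweep that the xtrace above and below is exercising can be summarized in the following bash sketch, reconstructed from the traced lines of host/auth.sh and nvmf/common.sh. The rpc_cmd sub-commands, their flags, the host/subsystem NQNs, and the 192.168.100.8 / 4420 target address are taken verbatim from the trace; the configfs paths, the keys/ckeys fixture arrays, and the full dhgroups list (only ffdhe3072 through ffdhe8192 appear in this excerpt) are assumptions added for illustration only.

# Assumed fixtures: keys[0..4] / ckeys[0..4] hold DHHC-1 secrets already registered with
# the SPDK host as key0..key4 / ckey0..ckey4 (ckeys[4] is empty in the trace), and
# rpc_cmd is the test suite's JSON-RPC wrapper.
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # full list assumed

get_main_ns_ip() {                  # nvmf/common.sh@741-755, as traced
    local ip
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}    # rdma -> NVMF_FIRST_TARGET_IP
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"                           # 192.168.100.8 in this run
}

nvmet_auth_set_key() {              # host/auth.sh@42-51; configfs path is an assumption
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac(${digest})" > "$host/dhchap_hash"        # e.g. 'hmac(sha384)'
    echo "$dhgroup" > "$host/dhchap_dhgroup"
    echo "$key" > "$host/dhchap_key"
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}

connect_authenticate() {            # host/auth.sh@55-65, as traced
    local digest=$1 dhgroup=$2 keyid=$3
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    local ip
    ip=$(get_main_ns_ip)
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a "$ip" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

for dhgroup in "${dhgroups[@]}"; do           # host/auth.sh@101
    for keyid in "${!keys[@]}"; do            # host/auth.sh@102
        nvmet_auth_set_key sha384 "$dhgroup" "$keyid"
        connect_authenticate sha384 "$dhgroup" "$keyid"
    done
done

Each pass is expected to leave exactly one controller behind, which is why every iteration in the trace ends with bdev_nvme_get_controllers piped through jq -r '.[].name', a check that the result is nvme0, and a bdev_nvme_detach_controller nvme0 before the next key is loaded.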
00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzMwODI5NmEzOTAyZDBlNWM4NjMwMTZlZDM4MTk5NWa5/nRo: 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzMwODI5NmEzOTAyZDBlNWM4NjMwMTZlZDM4MTk5NWa5/nRo: 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: ]] 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:09.109 07:28:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:09.676 nvme0n1 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: ]] 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:09.676 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:09.677 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:09.677 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:09.677 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:09.935 nvme0n1 00:39:09.935 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:09.935 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:09.935 
07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:09.935 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:09.935 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:09.935 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:09.935 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:09.935 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:09.935 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:09.935 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:09.935 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:09.935 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:09.935 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:39:09.935 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:09.935 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:09.935 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:09.935 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:09.935 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdhOTE5MmEwZTUxNzdiZmVjNDM5ODc3YjMwNDJlMWavo7Mw: 00:39:09.935 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: 00:39:09.935 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:09.935 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:09.936 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdhOTE5MmEwZTUxNzdiZmVjNDM5ODc3YjMwNDJlMWavo7Mw: 00:39:09.936 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: ]] 00:39:09.936 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: 00:39:09.936 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:39:09.936 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:09.936 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:09.936 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:09.936 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:09.936 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:09.936 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:39:09.936 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:09.936 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:39:09.936 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:09.936 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:09.936 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:09.936 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:09.936 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:09.936 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:09.936 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:09.936 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:09.936 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:09.936 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:09.936 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:09.936 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:09.936 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:09.936 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:09.936 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:10.194 nvme0n1 00:39:10.194 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:10.194 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:10.194 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:10.194 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:10.194 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:10.194 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:10.452 07:28:24 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmViMWNlMmE4NTlhMjk3Y2FlMDcyZGU0NzdkNzI1NmM2NjU2OWVlOGI3YjhkM2M4F3oqSA==: 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmViMWNlMmE4NTlhMjk3Y2FlMDcyZGU0NzdkNzI1NmM2NjU2OWVlOGI3YjhkM2M4F3oqSA==: 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: ]] 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:10.452 07:28:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:10.711 nvme0n1 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJlNWMzZjM2OGU2YTVjMjY4NjA4ZDIyODZhNzA1Y2NmN2Q2NGU2MTQ5MzhkMzAzMTdmN2FkNDlkODQ3NzhjNAXvFjk=: 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJlNWMzZjM2OGU2YTVjMjY4NjA4ZDIyODZhNzA1Y2NmN2Q2NGU2MTQ5MzhkMzAzMTdmN2FkNDlkODQ3NzhjNAXvFjk=: 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:10.711 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:10.712 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:10.712 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:10.712 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:10.970 nvme0n1 00:39:10.970 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:10.970 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:10.970 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:10.970 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:10.970 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzMwODI5NmEzOTAyZDBlNWM4NjMwMTZlZDM4MTk5NWa5/nRo: 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzMwODI5NmEzOTAyZDBlNWM4NjMwMTZlZDM4MTk5NWa5/nRo: 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: ]] 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:11.229 07:28:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:11.487 nvme0n1 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: ]] 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:39:11.745 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:11.746 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:11.746 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:11.746 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:11.746 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:11.746 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:11.746 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:11.746 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:11.746 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:11.746 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:11.746 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:11.746 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:11.746 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:11.746 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:11.746 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:11.746 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:11.746 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:12.311 nvme0n1 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.311 07:28:26 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdhOTE5MmEwZTUxNzdiZmVjNDM5ODc3YjMwNDJlMWavo7Mw: 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdhOTE5MmEwZTUxNzdiZmVjNDM5ODc3YjMwNDJlMWavo7Mw: 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: ]] 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:12.311 07:28:26 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.311 07:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:12.569 nvme0n1 00:39:12.569 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.569 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:12.569 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.569 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:12.569 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:12.569 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.827 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:12.827 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:12.827 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.827 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:12.827 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.827 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:12.827 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:39:12.827 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:12.827 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:12.827 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:12.827 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:12.827 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmViMWNlMmE4NTlhMjk3Y2FlMDcyZGU0NzdkNzI1NmM2NjU2OWVlOGI3YjhkM2M4F3oqSA==: 
00:39:12.827 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: 00:39:12.827 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:12.827 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:12.827 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmViMWNlMmE4NTlhMjk3Y2FlMDcyZGU0NzdkNzI1NmM2NjU2OWVlOGI3YjhkM2M4F3oqSA==: 00:39:12.827 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: ]] 00:39:12.827 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: 00:39:12.827 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:39:12.827 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:12.828 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:12.828 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:12.828 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:12.828 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:12.828 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:39:12.828 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.828 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:12.828 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.828 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:12.828 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:12.828 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:12.828 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:12.828 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:12.828 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:12.828 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:12.828 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:12.828 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:12.828 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:12.828 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:12.828 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:12.828 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.828 07:28:27 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:13.085 nvme0n1 00:39:13.085 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:13.085 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:13.085 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:13.085 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:13.085 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:13.085 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:13.085 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:13.085 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:13.085 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:13.085 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:13.342 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:13.342 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:13.342 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:39:13.342 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:13.342 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJlNWMzZjM2OGU2YTVjMjY4NjA4ZDIyODZhNzA1Y2NmN2Q2NGU2MTQ5MzhkMzAzMTdmN2FkNDlkODQ3NzhjNAXvFjk=: 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJlNWMzZjM2OGU2YTVjMjY4NjA4ZDIyODZhNzA1Y2NmN2Q2NGU2MTQ5MzhkMzAzMTdmN2FkNDlkODQ3NzhjNAXvFjk=: 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:13.343 07:28:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:13.600 nvme0n1 00:39:13.601 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:13.601 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:13.601 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:13.601 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:13.601 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:13.601 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:13.859 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:13.859 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:13.859 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:13.859 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:13.859 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:13.859 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:13.859 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:13.859 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:39:13.859 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:39:13.859 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:13.859 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:13.859 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:13.859 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzMwODI5NmEzOTAyZDBlNWM4NjMwMTZlZDM4MTk5NWa5/nRo: 00:39:13.859 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: 00:39:13.859 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:13.859 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:13.859 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzMwODI5NmEzOTAyZDBlNWM4NjMwMTZlZDM4MTk5NWa5/nRo: 00:39:13.859 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: ]] 00:39:13.859 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: 00:39:13.859 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:39:13.860 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:13.860 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:13.860 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:13.860 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:13.860 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:13.860 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:39:13.860 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:13.860 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:13.860 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:13.860 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:13.860 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:13.860 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:13.860 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:13.860 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:13.860 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:13.860 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:13.860 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:13.860 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:13.860 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 
192.168.100.8 ]] 00:39:13.860 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:13.860 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:13.860 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:13.860 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.457 nvme0n1 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: ]] 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:14.457 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:14.458 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:14.458 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:14.458 07:28:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.025 nvme0n1 00:39:15.025 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:15.025 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:15.025 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:15.025 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:15.025 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.025 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:15.025 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:15.025 07:28:29 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:15.025 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:15.025 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdhOTE5MmEwZTUxNzdiZmVjNDM5ODc3YjMwNDJlMWavo7Mw: 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdhOTE5MmEwZTUxNzdiZmVjNDM5ODc3YjMwNDJlMWavo7Mw: 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: ]] 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:15.284 07:28:29 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:15.284 07:28:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.852 nvme0n1 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmViMWNlMmE4NTlhMjk3Y2FlMDcyZGU0NzdkNzI1NmM2NjU2OWVlOGI3YjhkM2M4F3oqSA==: 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe8192 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmViMWNlMmE4NTlhMjk3Y2FlMDcyZGU0NzdkNzI1NmM2NjU2OWVlOGI3YjhkM2M4F3oqSA==: 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: ]] 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:15.852 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:15.853 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:15.853 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:15.853 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:15.853 07:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:16.420 nvme0n1 00:39:16.420 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.420 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:16.420 07:28:31 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:16.420 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.420 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:16.420 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.420 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:16.420 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:16.420 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.420 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJlNWMzZjM2OGU2YTVjMjY4NjA4ZDIyODZhNzA1Y2NmN2Q2NGU2MTQ5MzhkMzAzMTdmN2FkNDlkODQ3NzhjNAXvFjk=: 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJlNWMzZjM2OGU2YTVjMjY4NjA4ZDIyODZhNzA1Y2NmN2Q2NGU2MTQ5MzhkMzAzMTdmN2FkNDlkODQ3NzhjNAXvFjk=: 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.680 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:17.249 nvme0n1 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
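Annotation of the host/auth.sh@42-@51 trace above: nvmet_auth_set_key is the target-side half of the test. For each digest/dhgroup/key-ID combination it picks one of the pre-generated DHHC-1 secrets (the keys/ckeys arrays driven by the "for keyid in ${!keys[@]}" loop) and programs the kernel nvmet target with the DH-HMAC-CHAP parameters for the test host. A minimal sketch of what the echoed values end up configuring, assuming the standard Linux nvmet configfs layout; the configfs path itself is an assumption and is never printed in this log:

nvmet_auth_set_key() {
    # Arguments mirror the trace: digest (e.g. sha384), dhgroup (e.g. ffdhe8192), key ID.
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[$keyid]} ckey=${ckeys[$keyid]}   # arrays populated earlier in the script
    local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path

    echo "hmac($digest)" > "$host_dir/dhchap_hash"      # the 'hmac(sha384)' echo at @48
    echo "$dhgroup"      > "$host_dir/dhchap_dhgroup"   # the ffdhe8192 echo at @49
    echo "$key"          > "$host_dir/dhchap_key"       # the DHHC-1:... echo at @50
    # @51: a controller (bidirectional) secret is only set when ckey is non-empty;
    # key ID 4 has ckey='' in this run, so that iteration stays unidirectional.
    if [[ -n $ckey ]]; then
        echo "$ckey" > "$host_dir/dhchap_ctrl_key"
    fi
}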
00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzMwODI5NmEzOTAyZDBlNWM4NjMwMTZlZDM4MTk5NWa5/nRo: 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzMwODI5NmEzOTAyZDBlNWM4NjMwMTZlZDM4MTk5NWa5/nRo: 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: ]] 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.249 07:28:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:17.509 nvme0n1 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: ]] 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:17.509 07:28:32 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:17.509 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:17.768 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:17.768 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.768 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:17.768 nvme0n1 00:39:17.768 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.768 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:17.768 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:17.768 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.768 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:17.769 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.769 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:17.769 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:17.769 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.769 
07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdhOTE5MmEwZTUxNzdiZmVjNDM5ODc3YjMwNDJlMWavo7Mw: 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdhOTE5MmEwZTUxNzdiZmVjNDM5ODc3YjMwNDJlMWavo7Mw: 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: ]] 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:18.027 
07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:18.027 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:18.028 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:18.028 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:18.028 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:18.028 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:18.028 nvme0n1 00:39:18.028 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:18.028 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:18.028 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:18.028 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:18.028 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:18.028 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:18.286 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:18.286 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:18.286 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:18.286 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:18.286 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:18.286 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmViMWNlMmE4NTlhMjk3Y2FlMDcyZGU0NzdkNzI1NmM2NjU2OWVlOGI3YjhkM2M4F3oqSA==: 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmViMWNlMmE4NTlhMjk3Y2FlMDcyZGU0NzdkNzI1NmM2NjU2OWVlOGI3YjhkM2M4F3oqSA==: 00:39:18.287 07:28:32 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: ]] 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:18.287 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:18.546 nvme0n1 00:39:18.546 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:18.546 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:18.546 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:18.546 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:18.546 07:28:32 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:18.546 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:18.546 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:18.546 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:18.546 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:18.546 07:28:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJlNWMzZjM2OGU2YTVjMjY4NjA4ZDIyODZhNzA1Y2NmN2Q2NGU2MTQ5MzhkMzAzMTdmN2FkNDlkODQ3NzhjNAXvFjk=: 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJlNWMzZjM2OGU2YTVjMjY4NjA4ZDIyODZhNzA1Y2NmN2Q2NGU2MTQ5MzhkMzAzMTdmN2FkNDlkODQ3NzhjNAXvFjk=: 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 
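The initiator-side counterpart, connect_authenticate, is what the host/auth.sh@55-@65 lines above record: restrict the SPDK bdev_nvme module to the digest/dhgroup pair under test, resolve the RDMA target address (get_main_ns_ip selects NVMF_FIRST_TARGET_IP, 192.168.100.8, because the transport is rdma), attach with the matching key, confirm the controller shows up, and detach. The same sequence expressed as direct rpc.py calls — a sketch only: the rpc.py location and socket path are assumptions, while the subcommands, flags, addresses and NQNs are the ones visible in the trace:

RPC="scripts/rpc.py -s /var/tmp/spdk.sock"   # assumed default SPDK RPC socket

# Limit DH-HMAC-CHAP negotiation to the combination under test (here sha512 / ffdhe2048).
$RPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# Attach over RDMA with the key name used in this iteration ("key4"); iterations whose
# key ID also has a controller secret additionally pass --dhchap-ctrlr-key ckeyN.
$RPC bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key4

# Success check and cleanup, matching the @64/@65 lines in the trace.
$RPC bdev_nvme_get_controllers | jq -r '.[].name'   # expected output: nvme0
$RPC bdev_nvme_detach_controller nvme0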
00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:18.546 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:18.806 nvme0n1 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzMwODI5NmEzOTAyZDBlNWM4NjMwMTZlZDM4MTk5NWa5/nRo: 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzMwODI5NmEzOTAyZDBlNWM4NjMwMTZlZDM4MTk5NWa5/nRo: 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: ]] 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:18.806 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:18.806 07:28:33 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:19.065 nvme0n1 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: ]] 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.065 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:19.323 nvme0n1 00:39:19.324 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.324 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:19.324 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:19.324 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.324 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:19.324 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.582 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:19.582 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:19.582 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.582 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:19.582 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.582 07:28:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:19.582 
07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdhOTE5MmEwZTUxNzdiZmVjNDM5ODc3YjMwNDJlMWavo7Mw: 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdhOTE5MmEwZTUxNzdiZmVjNDM5ODc3YjMwNDJlMWavo7Mw: 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: ]] 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 
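nvmet_auth_set_key (host/auth.sh@42-51) resolves the key and controller key for the given key id, then echoes 'hmac(sha512)', the DH group, and the DHHC-1 secrets, skipping the controller key when ckey is empty. The destinations of those echoes fall outside this trace; assuming the target side is the Linux kernel nvmet (the usual peer for the nvmf_auth_host test), the values would land in the per-host configfs attributes roughly as below, with the paths and attribute names being an assumption rather than something shown in the log:

    # assumed kernel-nvmet configfs layout; the host NQN is taken from the -q value in the trace
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)' > "$host_dir/dhchap_hash"       # digest echoed at auth.sh@48
    echo ffdhe3072      > "$host_dir/dhchap_dhgroup"    # DH group echoed at auth.sh@49
    echo "$key"         > "$host_dir/dhchap_key"        # host secret echoed at auth.sh@50
    [[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"   # bidirectional secret, auth.sh@51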
00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.582 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:19.841 nvme0n1 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmViMWNlMmE4NTlhMjk3Y2FlMDcyZGU0NzdkNzI1NmM2NjU2OWVlOGI3YjhkM2M4F3oqSA==: 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmViMWNlMmE4NTlhMjk3Y2FlMDcyZGU0NzdkNzI1NmM2NjU2OWVlOGI3YjhkM2M4F3oqSA==: 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: ]] 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: 00:39:19.841 07:28:34 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.841 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:20.100 nvme0n1 00:39:20.100 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:20.100 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:20.100 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:20.100 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:20.100 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:20.100 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:20.100 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:20.100 
07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:20.100 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:20.100 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:20.100 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:20.100 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:20.100 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:39:20.100 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:20.100 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:20.100 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:20.100 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:20.101 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJlNWMzZjM2OGU2YTVjMjY4NjA4ZDIyODZhNzA1Y2NmN2Q2NGU2MTQ5MzhkMzAzMTdmN2FkNDlkODQ3NzhjNAXvFjk=: 00:39:20.101 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:20.101 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:20.101 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:20.101 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJlNWMzZjM2OGU2YTVjMjY4NjA4ZDIyODZhNzA1Y2NmN2Q2NGU2MTQ5MzhkMzAzMTdmN2FkNDlkODQ3NzhjNAXvFjk=: 00:39:20.101 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:20.101 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:39:20.101 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:20.101 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:20.101 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:20.101 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:20.101 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:20.101 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:39:20.101 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:20.101 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:20.101 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:20.101 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:20.101 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:20.101 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:20.101 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:20.101 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:20.101 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:20.101 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:20.101 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:20.101 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:20.101 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:20.101 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:20.101 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:20.101 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:20.101 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:20.359 nvme0n1 00:39:20.359 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:20.359 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:20.359 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:20.359 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:20.359 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:20.359 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:20.359 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:20.359 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:20.359 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:20.359 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:20.359 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:20.359 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:20.360 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:20.360 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:39:20.360 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:20.360 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:20.360 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:20.360 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:20.360 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzMwODI5NmEzOTAyZDBlNWM4NjMwMTZlZDM4MTk5NWa5/nRo: 00:39:20.360 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: 00:39:20.360 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:20.360 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:20.360 07:28:34 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzMwODI5NmEzOTAyZDBlNWM4NjMwMTZlZDM4MTk5NWa5/nRo: 00:39:20.360 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: ]] 00:39:20.360 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: 00:39:20.360 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:39:20.360 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:20.360 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:20.618 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:20.618 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:20.618 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:20.618 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:39:20.618 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:20.618 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:20.618 07:28:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:20.618 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:20.618 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:20.618 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:20.618 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:20.618 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:20.618 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:20.618 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:20.618 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:20.618 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:20.618 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:20.618 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:20.618 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:20.618 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:20.618 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:20.905 nvme0n1 00:39:20.905 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:20.905 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 
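connect_authenticate (host/auth.sh@104, expanded at @55-@65) drives the initiator side purely over RPC: it restricts bdev_nvme to the digest and DH group under test, attaches a controller to 192.168.100.8:4420 over RDMA with the matching --dhchap-key (plus --dhchap-ctrlr-key when a controller key exists), confirms that bdev_nvme_get_controllers reports nvme0, and detaches it again. rpc_cmd in the log is the autotest wrapper around SPDK's scripts/rpc.py; the same sequence issued directly, using the values visible just above, would look roughly like:

    # sketch of the RPC sequence from this iteration (sha512 / ffdhe4096 / key id 0)
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0      # key0/ckey0 are key names set up earlier in the run
    [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    scripts/rpc.py bdev_nvme_detach_controller nvme0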
00:39:20.905 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:20.905 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:20.905 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:20.905 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:20.905 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: ]] 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 
-- # xtrace_disable 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:20.906 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.165 nvme0n1 00:39:21.165 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:21.165 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:21.166 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:21.166 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:21.166 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.166 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:21.166 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:21.166 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:21.166 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:21.166 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.166 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:21.166 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:21.166 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:39:21.166 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:21.166 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:21.166 
07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:21.166 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:21.166 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdhOTE5MmEwZTUxNzdiZmVjNDM5ODc3YjMwNDJlMWavo7Mw: 00:39:21.166 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: 00:39:21.166 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:21.166 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:21.166 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdhOTE5MmEwZTUxNzdiZmVjNDM5ODc3YjMwNDJlMWavo7Mw: 00:39:21.166 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: ]] 00:39:21.166 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: 00:39:21.166 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:39:21.166 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:21.166 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:21.166 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:21.166 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:21.166 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:21.166 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:39:21.166 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:21.166 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.425 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:21.425 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:21.425 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:21.425 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:21.425 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:21.425 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:21.425 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:21.425 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:21.425 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:21.425 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:21.425 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:21.425 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:21.425 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma 
-f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:21.425 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:21.425 07:28:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.684 nvme0n1 00:39:21.684 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:21.684 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:21.684 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:21.684 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:21.684 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.684 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:21.684 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:21.684 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:21.684 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:21.684 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.684 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:21.684 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:21.684 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:39:21.684 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:21.684 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:21.684 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:21.684 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:21.684 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmViMWNlMmE4NTlhMjk3Y2FlMDcyZGU0NzdkNzI1NmM2NjU2OWVlOGI3YjhkM2M4F3oqSA==: 00:39:21.684 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: 00:39:21.684 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:21.684 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:21.684 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmViMWNlMmE4NTlhMjk3Y2FlMDcyZGU0NzdkNzI1NmM2NjU2OWVlOGI3YjhkM2M4F3oqSA==: 00:39:21.684 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: ]] 00:39:21.684 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: 00:39:21.684 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:39:21.684 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:21.684 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:21.685 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:21.685 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:21.685 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:21.685 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:39:21.685 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:21.685 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.685 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:21.685 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:21.685 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:21.685 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:21.685 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:21.685 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:21.685 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:21.685 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:21.685 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:21.685 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:21.685 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:21.685 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:21.685 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:21.685 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:21.685 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.945 nvme0n1 00:39:21.945 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:21.945 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:21.945 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:21.945 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:21.945 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.945 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:21.945 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:21.945 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:21.945 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:21.945 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.945 07:28:36 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:21.945 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:21.945 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:39:21.945 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:21.945 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:21.945 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:21.945 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:21.945 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJlNWMzZjM2OGU2YTVjMjY4NjA4ZDIyODZhNzA1Y2NmN2Q2NGU2MTQ5MzhkMzAzMTdmN2FkNDlkODQ3NzhjNAXvFjk=: 00:39:21.945 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:21.945 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:22.204 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:22.204 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJlNWMzZjM2OGU2YTVjMjY4NjA4ZDIyODZhNzA1Y2NmN2Q2NGU2MTQ5MzhkMzAzMTdmN2FkNDlkODQ3NzhjNAXvFjk=: 00:39:22.204 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:22.204 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:39:22.204 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:22.204 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:22.204 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:22.204 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:22.204 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:22.204 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:39:22.204 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:22.204 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:22.204 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:22.204 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:22.204 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:22.204 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:22.204 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:22.204 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:22.204 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:22.204 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:22.204 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:22.204 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_FIRST_TARGET_IP 00:39:22.204 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:22.204 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:22.204 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:22.204 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:22.204 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:22.463 nvme0n1 00:39:22.463 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:22.463 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:22.463 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:22.463 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:22.463 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:22.463 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:22.463 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:22.463 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:22.463 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:22.463 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:22.463 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:22.463 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:22.463 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:22.463 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:39:22.463 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:22.463 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:22.463 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:22.463 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:22.463 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzMwODI5NmEzOTAyZDBlNWM4NjMwMTZlZDM4MTk5NWa5/nRo: 00:39:22.463 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: 00:39:22.463 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:22.463 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:22.463 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzMwODI5NmEzOTAyZDBlNWM4NjMwMTZlZDM4MTk5NWa5/nRo: 00:39:22.464 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: ]] 00:39:22.464 07:28:36 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: 00:39:22.464 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:39:22.464 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:22.464 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:22.464 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:22.464 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:22.464 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:22.464 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:39:22.464 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:22.464 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:22.464 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:22.464 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:22.464 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:22.464 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:22.464 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:22.464 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:22.464 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:22.464 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:22.464 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:22.464 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:22.464 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:22.464 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:22.464 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:22.464 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:22.464 07:28:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:23.032 nvme0n1 00:39:23.032 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.032 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:23.032 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:23.032 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.032 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:23.032 07:28:37 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.032 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:23.032 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:23.032 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.032 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:23.032 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.032 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: ]] 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.033 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:23.601 nvme0n1 00:39:23.601 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.601 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:23.601 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:23.601 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.601 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:23.601 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.601 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:23.601 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:23.601 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.601 07:28:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdhOTE5MmEwZTUxNzdiZmVjNDM5ODc3YjMwNDJlMWavo7Mw: 
00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdhOTE5MmEwZTUxNzdiZmVjNDM5ODc3YjMwNDJlMWavo7Mw: 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: ]] 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.601 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:39:23.860 nvme0n1 00:39:23.860 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.860 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:23.860 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:23.860 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.860 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:23.860 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.860 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:23.860 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:23.860 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.860 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmViMWNlMmE4NTlhMjk3Y2FlMDcyZGU0NzdkNzI1NmM2NjU2OWVlOGI3YjhkM2M4F3oqSA==: 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmViMWNlMmE4NTlhMjk3Y2FlMDcyZGU0NzdkNzI1NmM2NjU2OWVlOGI3YjhkM2M4F3oqSA==: 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: ]] 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:24.119 07:28:38 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:24.119 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:24.378 nvme0n1 00:39:24.378 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.378 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:24.378 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:24.378 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:24.378 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:24.378 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.378 07:28:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:24.378 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:24.378 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:24.378 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 
4 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJlNWMzZjM2OGU2YTVjMjY4NjA4ZDIyODZhNzA1Y2NmN2Q2NGU2MTQ5MzhkMzAzMTdmN2FkNDlkODQ3NzhjNAXvFjk=: 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJlNWMzZjM2OGU2YTVjMjY4NjA4ZDIyODZhNzA1Y2NmN2Q2NGU2MTQ5MzhkMzAzMTdmN2FkNDlkODQ3NzhjNAXvFjk=: 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:24.637 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:24.896 nvme0n1 00:39:24.896 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.896 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:24.896 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:24.896 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:24.896 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:24.896 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzMwODI5NmEzOTAyZDBlNWM4NjMwMTZlZDM4MTk5NWa5/nRo: 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzMwODI5NmEzOTAyZDBlNWM4NjMwMTZlZDM4MTk5NWa5/nRo: 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: ]] 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjM5YjBkN2MxYWI4YTRkMjYyZmI2MTQ4MDQzZTMyNzY2NjIwMDk2MGZmMWM2NDE3Yzk5OWIyOWZmZTBkYzVjNVU3xeA=: 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:39:25.156 07:28:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:25.156 07:28:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.724 nvme0n1 00:39:25.724 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:25.724 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:25.724 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:25.724 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:25.724 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.724 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:25.724 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:25.724 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
00:39:25.724 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:25.724 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: ]] 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:25.725 
07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:25.725 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.293 nvme0n1 00:39:26.293 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:26.552 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:26.552 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:26.552 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:26.552 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.552 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:26.552 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:26.552 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:26.552 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:26.552 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.552 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:26.552 07:28:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdhOTE5MmEwZTUxNzdiZmVjNDM5ODc3YjMwNDJlMWavo7Mw: 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 
-- # echo ffdhe8192 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdhOTE5MmEwZTUxNzdiZmVjNDM5ODc3YjMwNDJlMWavo7Mw: 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: ]] 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTAxNWE4ZjZhY2VlNGM1MzVhOWQwMmQ0NDE4ZDJmODTj0SmC: 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:26.552 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.120 nvme0n1 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:27.120 07:28:41 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmViMWNlMmE4NTlhMjk3Y2FlMDcyZGU0NzdkNzI1NmM2NjU2OWVlOGI3YjhkM2M4F3oqSA==: 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmViMWNlMmE4NTlhMjk3Y2FlMDcyZGU0NzdkNzI1NmM2NjU2OWVlOGI3YjhkM2M4F3oqSA==: 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: ]] 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjYwYjJjMDY3Y2NmYjA3MTJkZjkxODdiNjAzYThjN2P9aXKI: 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:27.120 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:27.121 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:27.121 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:27.121 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:27.121 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:27.121 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:27.121 07:28:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.058 nvme0n1 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe8192 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJlNWMzZjM2OGU2YTVjMjY4NjA4ZDIyODZhNzA1Y2NmN2Q2NGU2MTQ5MzhkMzAzMTdmN2FkNDlkODQ3NzhjNAXvFjk=: 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJlNWMzZjM2OGU2YTVjMjY4NjA4ZDIyODZhNzA1Y2NmN2Q2NGU2MTQ5MzhkMzAzMTdmN2FkNDlkODQ3NzhjNAXvFjk=: 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:28.058 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:28.059 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:28.059 07:28:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:28.059 07:28:42 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.627 nvme0n1 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4OGRmMmRkM2RlMDMwMDUyZTU5YTZkNmNhODEyZWQwOGRmM2I2ZjA1NWIyYWEysjzO7A==: 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: ]] 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWFlMGVkNDI0YjZiYzAxOThhZTVlZTc3NDI2ZTJhYzQyY2RjOTg4OTE2YzNjYzEyqvbDMw==: 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:28.627 07:28:43 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:39:28.627 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:28.628 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:39:28.628 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:28.628 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:39:28.628 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:28.628 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.628 request: 00:39:28.628 { 00:39:28.628 "name": "nvme0", 00:39:28.628 "trtype": "rdma", 00:39:28.628 "traddr": "192.168.100.8", 00:39:28.628 "adrfam": "ipv4", 00:39:28.628 "trsvcid": "4420", 00:39:28.628 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:39:28.628 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:39:28.628 "prchk_reftag": false, 00:39:28.628 "prchk_guard": false, 00:39:28.628 "hdgst": false, 00:39:28.628 "ddgst": false, 00:39:28.886 "method": "bdev_nvme_attach_controller", 00:39:28.886 "req_id": 1 00:39:28.886 } 00:39:28.886 Got JSON-RPC error response 00:39:28.886 response: 00:39:28.886 { 00:39:28.886 "code": -5, 00:39:28.886 "message": "Input/output error" 00:39:28.886 } 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.886 request: 
00:39:28.886 { 00:39:28.886 "name": "nvme0", 00:39:28.886 "trtype": "rdma", 00:39:28.886 "traddr": "192.168.100.8", 00:39:28.886 "adrfam": "ipv4", 00:39:28.886 "trsvcid": "4420", 00:39:28.886 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:39:28.886 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:39:28.886 "prchk_reftag": false, 00:39:28.886 "prchk_guard": false, 00:39:28.886 "hdgst": false, 00:39:28.886 "ddgst": false, 00:39:28.886 "dhchap_key": "key2", 00:39:28.886 "method": "bdev_nvme_attach_controller", 00:39:28.886 "req_id": 1 00:39:28.886 } 00:39:28.886 Got JSON-RPC error response 00:39:28.886 response: 00:39:28.886 { 00:39:28.886 "code": -5, 00:39:28.886 "message": "Input/output error" 00:39:28.886 } 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:28.886 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:28.887 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:39:28.887 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:39:28.887 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:39:28.887 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:39:28.887 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:28.887 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:39:28.887 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:28.887 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:39:28.887 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:28.887 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.144 request: 00:39:29.144 { 00:39:29.144 "name": "nvme0", 00:39:29.144 "trtype": "rdma", 00:39:29.144 "traddr": "192.168.100.8", 00:39:29.144 "adrfam": "ipv4", 00:39:29.144 "trsvcid": "4420", 00:39:29.144 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:39:29.144 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:39:29.144 "prchk_reftag": false, 00:39:29.144 "prchk_guard": false, 00:39:29.144 "hdgst": false, 00:39:29.144 "ddgst": false, 00:39:29.144 "dhchap_key": "key1", 00:39:29.144 "dhchap_ctrlr_key": "ckey2", 00:39:29.144 "method": "bdev_nvme_attach_controller", 00:39:29.144 "req_id": 1 00:39:29.144 } 00:39:29.144 Got JSON-RPC error response 00:39:29.144 response: 00:39:29.144 { 00:39:29.144 "code": -5, 00:39:29.144 "message": "Input/output error" 00:39:29.144 } 00:39:29.144 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:39:29.144 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:39:29.144 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:29.144 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:29.144 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:29.144 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:39:29.144 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:39:29.144 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:39:29.144 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:29.144 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:39:29.144 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:39:29.144 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:39:29.144 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:39:29.144 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:29.144 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:39:29.144 rmmod nvme_rdma 00:39:29.144 rmmod nvme_fabrics 00:39:29.144 07:28:43 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:29.144 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:39:29.144 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:39:29.144 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1896713 ']' 00:39:29.145 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1896713 00:39:29.145 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1896713 ']' 00:39:29.145 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1896713 00:39:29.145 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:39:29.145 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:29.145 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1896713 00:39:29.145 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:29.145 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:29.145 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1896713' 00:39:29.145 killing process with pid 1896713 00:39:29.145 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1896713 00:39:29.145 07:28:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1896713 00:39:30.522 07:28:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:30.522 07:28:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:39:30.522 07:28:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:39:30.522 07:28:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:39:30.522 07:28:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:39:30.522 07:28:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:39:30.522 07:28:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:39:30.522 07:28:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:39:30.522 07:28:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:39:30.522 07:28:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:39:30.522 07:28:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:39:30.522 07:28:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:39:30.522 07:28:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:39:30.522 07:28:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:39:34.714 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:39:34.714 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:39:34.714 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:39:34.714 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:39:34.714 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:39:34.714 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:39:34.714 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:39:34.714 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:39:34.714 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:39:34.714 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:39:34.714 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:39:34.714 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:39:34.714 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:39:34.714 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:39:34.714 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:39:34.714 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:39:36.091 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:39:36.091 07:28:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.cch /tmp/spdk.key-null.A2O /tmp/spdk.key-sha256.SKq /tmp/spdk.key-sha384.5eZ /tmp/spdk.key-sha512.zPP /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:39:36.091 07:28:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:39:39.381 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:39:39.381 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:39:39.381 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:39:39.381 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:39:39.381 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:39:39.381 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:39:39.381 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:39:39.381 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:39:39.381 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:39:39.381 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:39:39.381 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:39:39.381 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:39:39.381 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:39:39.381 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:39:39.381 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:39:39.381 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:39:39.381 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:39:39.639 00:39:39.639 real 1m1.852s 00:39:39.639 user 0m53.571s 00:39:39.639 sys 0m17.987s 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:39.639 ************************************ 00:39:39.639 END TEST nvmf_auth_host 00:39:39.639 ************************************ 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:39:39.639 07:28:54 
nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:39:39.639 ************************************ 00:39:39.639 START TEST nvmf_bdevperf 00:39:39.639 ************************************ 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:39:39.639 * Looking for test storage... 00:39:39.639 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:39.639 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:39.640 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:39.640 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:39.640 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
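For orientation: the nvmf/common.sh sourcing above defines the host identity used across these tests, namely NVME_HOSTNQN from nvme gen-hostnqn, NVME_HOSTID, and the NVME_HOST flag array. This particular test drives I/O through bdevperf rather than the kernel initiator, but as a hedged illustration of what those variables are for, a kernel-side connect with them would look roughly like the sketch below (illustrative only; the target NQN and address are the ones this run configures later):

    # illustrative only: kernel initiator connect using the sourced host identity
    nvme connect -t rdma -a 192.168.100.8 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -i 15

The -i 15 mirrors the NVME_CONNECT='nvme connect -i 15' setting this log applies when the NICs are RDMA-capable.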
00:39:39.898 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:39.898 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:39.898 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:39.898 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:39:39.898 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:39:39.898 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:39.898 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:39.898 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:39.898 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:39.898 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:39.898 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:39.898 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:39.898 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:39.898 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:39.898 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:39:39.898 07:28:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:39:48.013 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:39:48.013 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:39:48.013 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # 
[[ rdma == rdma ]] 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:39:48.014 Found net devices under 0000:d9:00.0: mlx_0_0 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:39:48.014 Found net devices under 0000:d9:00.1: mlx_0_1 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # rdma_device_init 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # uname 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@63 -- # modprobe ib_core 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # 
modprobe rdma_cm 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:39:48.014 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:39:48.014 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:39:48.014 altname enp217s0f0np0 00:39:48.014 altname ens818f0np0 00:39:48.014 inet 192.168.100.8/24 scope global mlx_0_0 00:39:48.014 
valid_lft forever preferred_lft forever 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:39:48.014 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:39:48.014 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:39:48.014 altname enp217s0f1np1 00:39:48.014 altname ens818f1np1 00:39:48.014 inet 192.168.100.9/24 scope global mlx_0_1 00:39:48.014 valid_lft forever preferred_lft forever 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:39:48.014 07:29:02 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:39:48.014 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:39:48.015 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:39:48.015 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:39:48.015 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:39:48.015 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:39:48.015 192.168.100.9' 00:39:48.015 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:39:48.015 192.168.100.9' 00:39:48.015 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@457 -- # head -n 1 00:39:48.015 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:39:48.015 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:39:48.015 192.168.100.9' 00:39:48.015 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@458 -- # tail -n +2 00:39:48.015 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@458 -- # head -n 1 00:39:48.015 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:39:48.015 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:39:48.015 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:39:48.015 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:39:48.015 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:39:48.015 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:39:48.015 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:39:48.015 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:39:48.015 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:48.015 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:48.015 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:48.015 07:29:02 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1912520 00:39:48.015 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:39:48.015 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1912520 00:39:48.015 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1912520 ']' 00:39:48.015 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:48.015 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:48.015 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:48.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:48.015 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:48.015 07:29:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:48.015 [2024-07-24 07:29:02.465299] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:39:48.015 [2024-07-24 07:29:02.465392] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:48.015 EAL: No free 2048 kB hugepages reported on node 1 00:39:48.015 [2024-07-24 07:29:02.615262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:48.274 [2024-07-24 07:29:02.841742] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:48.274 [2024-07-24 07:29:02.841781] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:48.274 [2024-07-24 07:29:02.841799] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:48.274 [2024-07-24 07:29:02.841811] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:48.274 [2024-07-24 07:29:02.841823] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
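The target bring-up performed by the rpc_cmd calls that follow is equivalent, roughly, to this JSON-RPC sequence against the freshly started nvmf_tgt (a sketch only: scripts/rpc.py and the default /var/tmp/spdk.sock socket are assumed, the arguments are the ones visible in this log):

    # RDMA transport, 64 MiB malloc bdev, subsystem + namespace, listener on 192.168.100.8:4420
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420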
00:39:48.274 [2024-07-24 07:29:02.841959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:48.274 [2024-07-24 07:29:02.842026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:48.274 [2024-07-24 07:29:02.842035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:39:48.842 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:48.842 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:39:48.842 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:48.842 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:48.842 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:48.842 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:48.842 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:39:48.842 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:48.842 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:48.842 [2024-07-24 07:29:03.340716] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7f9d72b63940) succeed. 00:39:48.842 [2024-07-24 07:29:03.350949] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7f9d72b1d940) succeed. 00:39:49.101 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:49.101 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:49.101 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:49.101 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:49.101 Malloc0 00:39:49.101 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:49.101 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:49.101 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:49.101 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:49.101 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:49.101 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:49.101 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:49.101 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:49.101 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:49.101 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:39:49.101 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:49.101 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@10 -- # set +x 00:39:49.101 [2024-07-24 07:29:03.694754] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:39:49.101 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:49.101 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:39:49.101 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:39:49.101 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:39:49.101 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:39:49.101 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:49.101 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:49.101 { 00:39:49.101 "params": { 00:39:49.101 "name": "Nvme$subsystem", 00:39:49.101 "trtype": "$TEST_TRANSPORT", 00:39:49.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:49.101 "adrfam": "ipv4", 00:39:49.101 "trsvcid": "$NVMF_PORT", 00:39:49.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:49.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:49.101 "hdgst": ${hdgst:-false}, 00:39:49.101 "ddgst": ${ddgst:-false} 00:39:49.101 }, 00:39:49.101 "method": "bdev_nvme_attach_controller" 00:39:49.101 } 00:39:49.101 EOF 00:39:49.101 )") 00:39:49.101 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:39:49.101 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:39:49.101 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:39:49.101 07:29:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:49.101 "params": { 00:39:49.101 "name": "Nvme1", 00:39:49.101 "trtype": "rdma", 00:39:49.101 "traddr": "192.168.100.8", 00:39:49.101 "adrfam": "ipv4", 00:39:49.101 "trsvcid": "4420", 00:39:49.101 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:49.101 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:49.101 "hdgst": false, 00:39:49.101 "ddgst": false 00:39:49.101 }, 00:39:49.101 "method": "bdev_nvme_attach_controller" 00:39:49.101 }' 00:39:49.360 [2024-07-24 07:29:03.780043] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:39:49.360 [2024-07-24 07:29:03.780142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1912744 ] 00:39:49.360 EAL: No free 2048 kB hugepages reported on node 1 00:39:49.360 [2024-07-24 07:29:03.932071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:49.619 [2024-07-24 07:29:04.143714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:50.187 Running I/O for 1 seconds... 
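The gen_nvmf_target_json output printed just above is the bdev configuration bdevperf reads from /dev/fd/62; the run's results follow below. Written to a file, a standalone equivalent would look roughly like this (a sketch: the wrapper is the standard SPDK JSON config layout, the file name is invented, and the parameters are those shown in the log):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1", "trtype": "rdma", "traddr": "192.168.100.8",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false
              }
            }
          ]
        }
      ]
    }

    build/examples/bdevperf --json nvme1.json -q 128 -o 4096 -w verify -t 1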
00:39:51.123 00:39:51.123 Latency(us) 00:39:51.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:51.123 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:51.123 Verification LBA range: start 0x0 length 0x4000 00:39:51.123 Nvme1n1 : 1.01 15753.14 61.54 0.00 0.00 8081.34 3303.01 19713.23 00:39:51.123 =================================================================================================================== 00:39:51.123 Total : 15753.14 61.54 0.00 0.00 8081.34 3303.01 19713.23 00:39:52.502 07:29:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1913281 00:39:52.502 07:29:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:39:52.502 07:29:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:39:52.502 07:29:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:39:52.502 07:29:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:39:52.502 07:29:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:39:52.502 07:29:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:52.502 07:29:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:52.502 { 00:39:52.502 "params": { 00:39:52.502 "name": "Nvme$subsystem", 00:39:52.502 "trtype": "$TEST_TRANSPORT", 00:39:52.502 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:52.502 "adrfam": "ipv4", 00:39:52.502 "trsvcid": "$NVMF_PORT", 00:39:52.502 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:52.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:52.502 "hdgst": ${hdgst:-false}, 00:39:52.502 "ddgst": ${ddgst:-false} 00:39:52.502 }, 00:39:52.502 "method": "bdev_nvme_attach_controller" 00:39:52.502 } 00:39:52.502 EOF 00:39:52.502 )") 00:39:52.502 07:29:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:39:52.502 07:29:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:39:52.502 07:29:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:39:52.502 07:29:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:52.502 "params": { 00:39:52.502 "name": "Nvme1", 00:39:52.502 "trtype": "rdma", 00:39:52.502 "traddr": "192.168.100.8", 00:39:52.502 "adrfam": "ipv4", 00:39:52.502 "trsvcid": "4420", 00:39:52.502 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:52.502 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:52.502 "hdgst": false, 00:39:52.502 "ddgst": false 00:39:52.502 }, 00:39:52.502 "method": "bdev_nvme_attach_controller" 00:39:52.502 }' 00:39:52.502 [2024-07-24 07:29:06.779449] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:39:52.502 [2024-07-24 07:29:06.779548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1913281 ] 00:39:52.502 EAL: No free 2048 kB hugepages reported on node 1 00:39:52.502 [2024-07-24 07:29:06.925456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:52.761 [2024-07-24 07:29:07.146355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:53.020 Running I/O for 15 seconds... 00:39:55.555 07:29:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1912520 00:39:55.555 07:29:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:39:56.494 [2024-07-24 07:29:10.758698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.494 [2024-07-24 07:29:10.758758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.494 [2024-07-24 07:29:10.758798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.494 [2024-07-24 07:29:10.758812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.494 [2024-07-24 07:29:10.758828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.494 [2024-07-24 07:29:10.758841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.494 [2024-07-24 07:29:10.758855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.494 [2024-07-24 07:29:10.758868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.494 [2024-07-24 07:29:10.758882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.494 [2024-07-24 07:29:10.758895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.494 [2024-07-24 07:29:10.758910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.494 [2024-07-24 07:29:10.758922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.494 [2024-07-24 07:29:10.758936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.494 [2024-07-24 07:29:10.758948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.494 [2024-07-24 07:29:10.758962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.494 [2024-07-24 07:29:10.758974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.494 [2024-07-24 07:29:10.758987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.494 [2024-07-24 07:29:10.758999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.494 [2024-07-24 07:29:10.759013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.494 [2024-07-24 07:29:10.759025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.494 [2024-07-24 07:29:10.759039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.494 [2024-07-24 07:29:10.759051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.494 [2024-07-24 07:29:10.759065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.494 [2024-07-24 07:29:10.759077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.494 [2024-07-24 07:29:10.759090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.494 [2024-07-24 07:29:10.759102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.494 [2024-07-24 07:29:10.759119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.494 [2024-07-24 07:29:10.759132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.494 [2024-07-24 07:29:10.759145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.494 [2024-07-24 07:29:10.759157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.494 [2024-07-24 07:29:10.759171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.494 [2024-07-24 07:29:10.759183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.494 [2024-07-24 07:29:10.759197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.494 [2024-07-24 07:29:10.759209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.494 [2024-07-24 07:29:10.759224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.494 [2024-07-24 07:29:10.759237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:39:56.494 [2024-07-24 07:29:10.759251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.494 [2024-07-24 07:29:10.759262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.494 [2024-07-24 07:29:10.759276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.494 [2024-07-24 07:29:10.759289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.494 [2024-07-24 07:29:10.759302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.494 [2024-07-24 07:29:10.759315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.494 [2024-07-24 07:29:10.759329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.759341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.759355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.759367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.759381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.759393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.759408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.759420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.759434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.759446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.759461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.759473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.759487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.759500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.759514] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.759526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.759541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.759553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.759567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.759579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.759592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.759604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.759618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.759637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.759652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.759664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.759678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.759690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.759705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.759718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.759732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.759744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.759758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.759770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.759784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:83 nsid:1 lba:1864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.759798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.759811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.759823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.759837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.759849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.759863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.759875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.759889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.759901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.759915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.759927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.759940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.759953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.759967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.759979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.759992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.760004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.760018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.760030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.760044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1944 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:39:56.495 [2024-07-24 07:29:10.760056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.760077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.760089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.760104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.760116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.760131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.760143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.760158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.760170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.760184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.760196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.760210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.760222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.760236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.760249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.760262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.760275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.760288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.760300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.760314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.760326] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.760340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.495 [2024-07-24 07:29:10.760352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.495 [2024-07-24 07:29:10.760366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:56.496 [2024-07-24 07:29:10.760378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.760393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fd000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.760406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.760421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fb000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.760433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.760448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f9000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.760462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.760477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f7000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.760489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.760504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f5000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.760516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.760531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f3000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.760543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.760557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f1000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.760570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.760584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ef000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.760596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.760610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ed000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.760622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.760642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075eb000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.760654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.760670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e9000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.760683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.760697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e7000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.760710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.760724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e5000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.760737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.760751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e3000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.760764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.760780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e1000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.760792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.760806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075df000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.760819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.760833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dd000 len:0x1000 
key:0x182000 00:39:56.496 [2024-07-24 07:29:10.760845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.760859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075db000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.760872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.760887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d9000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.760899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.760913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d7000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.760925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.760940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d5000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.760953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.760967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d3000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.760979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.760993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d1000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.761006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.761020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cf000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.761032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.761047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cd000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.761060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.761075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cb000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.761087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.761103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c9000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.761116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.761130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c7000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.761142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.761157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c5000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.761169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.761184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c3000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.761195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.761209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c1000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.761221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.761236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bf000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.761248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.761262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bd000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.761275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.761289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bb000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.761301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.761316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b9000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.761328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.496 [2024-07-24 07:29:10.761342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:43 nsid:1 lba:1304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b7000 len:0x1000 key:0x182000 00:39:56.496 [2024-07-24 07:29:10.761354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.761368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b5000 len:0x1000 key:0x182000 00:39:56.497 [2024-07-24 07:29:10.761380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.761394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b3000 len:0x1000 key:0x182000 00:39:56.497 [2024-07-24 07:29:10.761408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.761422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b1000 len:0x1000 key:0x182000 00:39:56.497 [2024-07-24 07:29:10.761434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.761448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075af000 len:0x1000 key:0x182000 00:39:56.497 [2024-07-24 07:29:10.761461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.761475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ad000 len:0x1000 key:0x182000 00:39:56.497 [2024-07-24 07:29:10.761487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.761501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ab000 len:0x1000 key:0x182000 00:39:56.497 [2024-07-24 07:29:10.761513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.761527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a9000 len:0x1000 key:0x182000 00:39:56.497 [2024-07-24 07:29:10.761539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.761553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a7000 len:0x1000 key:0x182000 00:39:56.497 [2024-07-24 07:29:10.761565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.761580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a5000 len:0x1000 key:0x182000 00:39:56.497 [2024-07-24 
07:29:10.761592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.761605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a3000 len:0x1000 key:0x182000 00:39:56.497 [2024-07-24 07:29:10.761618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.761640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a1000 len:0x1000 key:0x182000 00:39:56.497 [2024-07-24 07:29:10.761653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.761667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759f000 len:0x1000 key:0x182000 00:39:56.497 [2024-07-24 07:29:10.761679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.761694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759d000 len:0x1000 key:0x182000 00:39:56.497 [2024-07-24 07:29:10.761706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.761722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759b000 len:0x1000 key:0x182000 00:39:56.497 [2024-07-24 07:29:10.761734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.761749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007599000 len:0x1000 key:0x182000 00:39:56.497 [2024-07-24 07:29:10.761761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.761775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007597000 len:0x1000 key:0x182000 00:39:56.497 [2024-07-24 07:29:10.761789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.761810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007595000 len:0x1000 key:0x182000 00:39:56.497 [2024-07-24 07:29:10.761823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.761837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007593000 len:0x1000 key:0x182000 00:39:56.497 [2024-07-24 07:29:10.761849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:39:56.497 [2024-07-24 07:29:10.761863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007591000 len:0x1000 key:0x182000 00:39:56.497 [2024-07-24 07:29:10.761876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.761890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758f000 len:0x1000 key:0x182000 00:39:56.497 [2024-07-24 07:29:10.761902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.761916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758d000 len:0x1000 key:0x182000 00:39:56.497 [2024-07-24 07:29:10.761929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.761943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758b000 len:0x1000 key:0x182000 00:39:56.497 [2024-07-24 07:29:10.761955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.761969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007589000 len:0x1000 key:0x182000 00:39:56.497 [2024-07-24 07:29:10.761982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.761996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007587000 len:0x1000 key:0x182000 00:39:56.497 [2024-07-24 07:29:10.762008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.762022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007585000 len:0x1000 key:0x182000 00:39:56.497 [2024-07-24 07:29:10.762035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.762050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007583000 len:0x1000 key:0x182000 00:39:56.497 [2024-07-24 07:29:10.762062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.762076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007581000 len:0x1000 key:0x182000 00:39:56.497 [2024-07-24 07:29:10.762089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.762103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1528 
len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757f000 len:0x1000 key:0x182000 00:39:56.497 [2024-07-24 07:29:10.762115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.762129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757d000 len:0x1000 key:0x182000 00:39:56.497 [2024-07-24 07:29:10.762143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.762157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757b000 len:0x1000 key:0x182000 00:39:56.497 [2024-07-24 07:29:10.762170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.764248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:56.497 [2024-07-24 07:29:10.764276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:56.497 [2024-07-24 07:29:10.764289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1552 len:8 PRP1 0x0 PRP2 0x0 00:39:56.497 [2024-07-24 07:29:10.764304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.497 [2024-07-24 07:29:10.764498] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20000b1ff140 was disconnected and freed. reset controller. 00:39:56.497 [2024-07-24 07:29:10.767545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:56.497 [2024-07-24 07:29:10.796095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:39:56.497 [2024-07-24 07:29:10.799065] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:39:56.497 [2024-07-24 07:29:10.799093] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:39:56.497 [2024-07-24 07:29:10.799105] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000137ff800 00:39:57.434 [2024-07-24 07:29:11.803333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:39:57.434 [2024-07-24 07:29:11.803408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:57.434 [2024-07-24 07:29:11.803700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:57.434 [2024-07-24 07:29:11.803720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:57.434 [2024-07-24 07:29:11.803733] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:39:57.434 [2024-07-24 07:29:11.805739] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
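The repeated RDMA_CM_EVENT_REJECTED / "Failed to connect rqpair" messages above are the host-side bdev_nvme layer retrying the connection after the old target process (1912520) was killed; the retries continue until the replacement target comes up further down. The pacing of such retries is configurable through bdev_nvme_set_options. A hedged illustration follows: the socket path and the option values are assumptions and are not settings used by this run, and the option names are taken from current SPDK rpc.py rather than from this log.

# Illustration only: nothing in this log sets these options.
# bdev_nvme_set_options must be issued before any controller is attached.
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options \
    --reconnect-delay-sec 1 \
    --ctrlr-loss-timeout-sec 30 \
    --fast-io-fail-timeout-sec 5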
00:39:57.434 [2024-07-24 07:29:11.806544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:57.434 [2024-07-24 07:29:11.818830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:57.434 [2024-07-24 07:29:11.822107] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:39:57.434 [2024-07-24 07:29:11.822133] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:39:57.434 [2024-07-24 07:29:11.822144] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000137ff800 00:39:58.402 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1912520 Killed "${NVMF_APP[@]}" "$@" 00:39:58.402 07:29:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:39:58.402 07:29:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:39:58.402 07:29:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:58.402 07:29:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:58.402 07:29:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:58.402 07:29:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1914342 00:39:58.402 07:29:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1914342 00:39:58.403 07:29:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:39:58.403 07:29:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1914342 ']' 00:39:58.403 07:29:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:58.403 07:29:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:58.403 07:29:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:58.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:58.403 07:29:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:58.403 07:29:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:58.403 [2024-07-24 07:29:12.802084] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:39:58.403 [2024-07-24 07:29:12.802180] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:58.403 [2024-07-24 07:29:12.826136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:39:58.403 [2024-07-24 07:29:12.826178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:39:58.403 [2024-07-24 07:29:12.826384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:58.403 [2024-07-24 07:29:12.826400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:58.403 [2024-07-24 07:29:12.826413] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:39:58.403 [2024-07-24 07:29:12.828378] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:39:58.403 [2024-07-24 07:29:12.829463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:58.403 [2024-07-24 07:29:12.841622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:58.403 [2024-07-24 07:29:12.844584] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:39:58.403 [2024-07-24 07:29:12.844616] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:39:58.403 [2024-07-24 07:29:12.844633] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000137ff800 00:39:58.403 EAL: No free 2048 kB hugepages reported on node 1 00:39:58.403 [2024-07-24 07:29:12.960348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:58.662 [2024-07-24 07:29:13.183713] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:58.662 [2024-07-24 07:29:13.183756] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:58.662 [2024-07-24 07:29:13.183773] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:58.662 [2024-07-24 07:29:13.183784] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:58.662 [2024-07-24 07:29:13.183796] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:58.662 [2024-07-24 07:29:13.183866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:58.662 [2024-07-24 07:29:13.183924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:58.662 [2024-07-24 07:29:13.183936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:39:59.230 07:29:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:59.230 07:29:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:39:59.230 07:29:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:59.230 07:29:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:59.230 07:29:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:59.230 07:29:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:59.230 07:29:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:39:59.230 07:29:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:59.230 07:29:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:59.230 [2024-07-24 07:29:13.679566] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7f77ab270940) succeed. 00:39:59.230 [2024-07-24 07:29:13.689883] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7f77ab22c940) succeed. 00:39:59.230 [2024-07-24 07:29:13.849046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:39:59.230 [2024-07-24 07:29:13.849094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:59.230 [2024-07-24 07:29:13.849294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:59.230 [2024-07-24 07:29:13.849311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:59.230 [2024-07-24 07:29:13.849324] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:39:59.230 [2024-07-24 07:29:13.852283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:59.230 [2024-07-24 07:29:13.855820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:59.230 [2024-07-24 07:29:13.859006] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:39:59.230 [2024-07-24 07:29:13.859037] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:39:59.230 [2024-07-24 07:29:13.859050] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000137ff800 00:39:59.489 07:29:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:59.489 07:29:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:59.489 07:29:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:59.489 07:29:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:59.489 Malloc0 00:39:59.489 07:29:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:59.489 07:29:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:59.489 07:29:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:59.489 07:29:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:59.489 07:29:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:59.489 07:29:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:59.489 07:29:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:59.489 07:29:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:59.489 07:29:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:59.489 07:29:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:39:59.489 07:29:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:59.489 07:29:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:59.489 [2024-07-24 07:29:14.044232] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:39:59.489 07:29:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:59.489 07:29:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1913281 00:40:00.426 [2024-07-24 07:29:14.863367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:40:00.426 [2024-07-24 07:29:14.863405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
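The rpc_cmd calls above rebuild the target state after the restart: an RDMA transport, a 64 MiB malloc bdev, subsystem cnode1 with that bdev as a namespace, and the 192.168.100.8:4420 listener. Outside the harness the same sequence can be driven with scripts/rpc.py against the target's RPC socket; the sketch below reuses the exact arguments from this trace and only assumes the default /var/tmp/spdk.sock socket and the repository-root working directory.

# Sketch of the equivalent manual setup (arguments copied from the trace above).
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420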
00:40:00.426 [2024-07-24 07:29:14.863601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:00.427 [2024-07-24 07:29:14.863617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:00.427 [2024-07-24 07:29:14.863636] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:40:00.427 [2024-07-24 07:29:14.863661] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:40:00.427 [2024-07-24 07:29:14.866592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:00.427 [2024-07-24 07:29:14.876824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:00.427 [2024-07-24 07:29:14.920914] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:40:08.548 00:40:08.548 Latency(us) 00:40:08.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:08.548 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:40:08.548 Verification LBA range: start 0x0 length 0x4000 00:40:08.548 Nvme1n1 : 15.01 10435.87 40.77 12670.15 0.00 5517.25 563.61 1060320.05 00:40:08.548 =================================================================================================================== 00:40:08.548 Total : 10435.87 40.77 12670.15 0.00 5517.25 563.61 1060320.05 00:40:09.483 07:29:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:40:09.483 07:29:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:09.483 07:29:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:09.483 07:29:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:09.483 07:29:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:09.483 07:29:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:40:09.483 07:29:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:40:09.483 07:29:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:09.483 07:29:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:40:09.483 07:29:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:40:09.483 07:29:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:40:09.483 07:29:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:40:09.483 07:29:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:09.483 07:29:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:40:09.483 rmmod nvme_rdma 00:40:09.483 rmmod nvme_fabrics 00:40:09.483 07:29:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:09.483 07:29:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:40:09.483 07:29:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:40:09.484 07:29:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1914342 ']' 00:40:09.484 07:29:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- 
# killprocess 1914342 00:40:09.484 07:29:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1914342 ']' 00:40:09.484 07:29:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1914342 00:40:09.484 07:29:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:40:09.484 07:29:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:09.484 07:29:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1914342 00:40:09.484 07:29:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:09.484 07:29:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:09.484 07:29:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1914342' 00:40:09.484 killing process with pid 1914342 00:40:09.484 07:29:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1914342 00:40:09.484 07:29:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1914342 00:40:11.389 07:29:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:11.389 07:29:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:40:11.389 00:40:11.389 real 0m31.676s 00:40:11.389 user 1m18.873s 00:40:11.389 sys 0m8.182s 00:40:11.389 07:29:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:11.389 07:29:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:11.389 ************************************ 00:40:11.389 END TEST nvmf_bdevperf 00:40:11.389 ************************************ 00:40:11.389 07:29:25 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:40:11.389 07:29:25 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:40:11.389 07:29:25 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:11.389 07:29:25 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:40:11.389 ************************************ 00:40:11.389 START TEST nvmf_target_disconnect 00:40:11.389 ************************************ 00:40:11.390 07:29:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:40:11.390 * Looking for test storage... 
00:40:11.390 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:40:11.390 07:29:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:40:11.390 07:29:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:40:11.390 07:29:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:11.390 07:29:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:11.390 07:29:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:11.390 07:29:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:11.390 07:29:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:11.390 07:29:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:11.390 07:29:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:11.390 07:29:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:11.390 07:29:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:11.390 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:11.390 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:40:11.390 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:40:11.390 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:11.390 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:11.390 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:11.390 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:11.390 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:40:11.390 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:11.390 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:11.390 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:11.390 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:40:11.390 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.390 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.390 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:40:11.390 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.390 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:40:11.390 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:11.390 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:11.390 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:11.390 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:11.390 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:11.650 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:11.650 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:11.650 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:11.650 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:40:11.650 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:40:11.650 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:40:11.650 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:40:11.650 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:40:11.650 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:11.650 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:11.650 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:11.650 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:11.650 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:11.650 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:11.650 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:11.650 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:11.650 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:11.650 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:40:11.650 07:29:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:40:19.770 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:40:19.770 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:40:19.770 07:29:33 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:40:19.770 Found net devices under 0000:d9:00.0: mlx_0_0 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:40:19.770 Found net devices under 0000:d9:00.1: mlx_0_1 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # uname 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:40:19.770 07:29:33 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:40:19.770 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:40:19.771 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:40:19.771 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:40:19.771 altname enp217s0f0np0 00:40:19.771 altname ens818f0np0 00:40:19.771 inet 192.168.100.8/24 scope global mlx_0_0 00:40:19.771 valid_lft forever preferred_lft forever 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:40:19.771 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:40:19.771 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:40:19.771 altname enp217s0f1np1 00:40:19.771 altname ens818f1np1 00:40:19.771 inet 192.168.100.9/24 scope global mlx_0_1 00:40:19.771 valid_lft forever preferred_lft forever 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 
00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:40:19.771 192.168.100.9' 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:40:19.771 192.168.100.9' 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:40:19.771 192.168.100.9' 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@458 -- # tail -n 
+2 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:40:19.771 ************************************ 00:40:19.771 START TEST nvmf_target_disconnect_tc1 00:40:19.771 ************************************ 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:19.771 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:40:19.772 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:19.772 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:40:19.772 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:19.772 07:29:33 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:40:19.772 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:40:19.772 07:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:40:19.772 EAL: No free 2048 kB hugepages reported on node 1 00:40:19.772 [2024-07-24 07:29:33.987352] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:40:19.772 [2024-07-24 07:29:33.987523] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:40:19.772 [2024-07-24 07:29:33.987584] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d6ec0 00:40:20.710 [2024-07-24 07:29:34.991735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:40:20.710 [2024-07-24 07:29:34.991826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:40:20.710 [2024-07-24 07:29:34.991878] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:40:20.710 [2024-07-24 07:29:34.992052] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:40:20.710 [2024-07-24 07:29:34.992099] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:40:20.710 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:40:20.710 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:40:20.710 Initializing NVMe Controllers 00:40:20.710 07:29:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:40:20.710 07:29:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:20.710 07:29:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:20.710 07:29:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:20.710 00:40:20.710 real 0m1.321s 00:40:20.710 user 0m0.910s 00:40:20.710 sys 0m0.399s 00:40:20.710 07:29:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:20.710 07:29:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:40:20.710 ************************************ 00:40:20.710 END TEST nvmf_target_disconnect_tc1 00:40:20.710 ************************************ 00:40:20.710 07:29:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:40:20.710 07:29:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:20.710 07:29:35 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:20.710 07:29:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:40:20.710 ************************************ 00:40:20.710 START TEST nvmf_target_disconnect_tc2 00:40:20.710 ************************************ 00:40:20.710 07:29:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:40:20.710 07:29:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:40:20.710 07:29:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:40:20.710 07:29:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:20.710 07:29:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:20.710 07:29:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:20.710 07:29:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1920546 00:40:20.710 07:29:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1920546 00:40:20.710 07:29:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:40:20.710 07:29:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1920546 ']' 00:40:20.710 07:29:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:20.710 07:29:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:20.710 07:29:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:20.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:20.710 07:29:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:20.710 07:29:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:20.710 [2024-07-24 07:29:35.270888] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:40:20.710 [2024-07-24 07:29:35.270978] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:20.969 EAL: No free 2048 kB hugepages reported on node 1 00:40:20.969 [2024-07-24 07:29:35.433121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:21.229 [2024-07-24 07:29:35.641920] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:40:21.229 [2024-07-24 07:29:35.641961] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:21.229 [2024-07-24 07:29:35.641976] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:21.229 [2024-07-24 07:29:35.642003] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:21.229 [2024-07-24 07:29:35.642015] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:21.229 [2024-07-24 07:29:35.642211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:40:21.229 [2024-07-24 07:29:35.642293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:40:21.229 [2024-07-24 07:29:35.642355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:40:21.229 [2024-07-24 07:29:35.642383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:40:21.488 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:21.488 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:40:21.488 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:21.488 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:21.488 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:21.488 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:21.489 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:21.489 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:21.489 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:21.748 Malloc0 00:40:21.748 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:21.748 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:40:21.748 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:21.748 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:21.748 [2024-07-24 07:29:36.218840] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000298c0/0x7f01e5505940) succeed. 00:40:21.748 [2024-07-24 07:29:36.228598] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029a40/0x7f01e53bd940) succeed. 
00:40:22.008 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.008 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:22.008 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:22.008 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:22.008 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.008 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:22.008 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:22.008 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:22.008 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.008 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:40:22.008 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:22.008 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:22.008 [2024-07-24 07:29:36.558459] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:40:22.008 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.008 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:40:22.008 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:22.008 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:22.008 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.008 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1920723 00:40:22.008 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:40:22.008 07:29:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:40:22.267 EAL: No free 2048 kB hugepages reported on node 1 00:40:24.226 07:29:38 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1920546 00:40:24.226 07:29:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:40:25.605 Read completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Read completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Read completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Write completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Read completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Read completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Write completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Write completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Read completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Write completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Write completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Read completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Write completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Read completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Write completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Read completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Write completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Write completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Read completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Write completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Write completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Read completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Read completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Write completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Write completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Read completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Write completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Write completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Write completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Write completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Read completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 Read completed with error (sct=0, sc=8) 00:40:25.605 starting I/O failed 00:40:25.605 [2024-07-24 07:29:39.863918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:26.174 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1920546 Killed "${NVMF_APP[@]}" "$@" 00:40:26.174 07:29:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:40:26.174 07:29:40 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:40:26.174 07:29:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:26.174 07:29:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:26.174 07:29:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:26.174 07:29:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1921496 00:40:26.174 07:29:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1921496 00:40:26.174 07:29:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:40:26.174 07:29:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1921496 ']' 00:40:26.174 07:29:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:26.174 07:29:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:26.174 07:29:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:26.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:26.174 07:29:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:26.174 07:29:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:26.174 [2024-07-24 07:29:40.672437] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:40:26.174 [2024-07-24 07:29:40.672529] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:26.174 EAL: No free 2048 kB hugepages reported on node 1 00:40:26.434 [2024-07-24 07:29:40.844239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:26.434 Write completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Read completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Write completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Read completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Read completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Write completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Read completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Read completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Write completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Write completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Read completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Write completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Read completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Read completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Write completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Write completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Read completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Read completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Write completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Write completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Write completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Read completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Read completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Read completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Read completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Write completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Read completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Write completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Read completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Write completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Read completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 Read completed with error (sct=0, sc=8) 00:40:26.434 starting I/O failed 00:40:26.434 [2024-07-24 07:29:40.871477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:26.693 [2024-07-24 07:29:41.066407] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
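The EAL line above notes that no free 2048 kB hugepages were reported on node 1. A minimal sketch, assuming the usual Linux sysfs layout for per-node hugepage counters (an assumption about the test host, not something taken from this log), of how that can be checked:
# Sketch only: per-NUMA-node 2048 kB hugepage availability.
for node in /sys/devices/system/node/node*; do
    dir=$node/hugepages/hugepages-2048kB
    echo "$(basename "$node"): $(cat "$dir"/free_hugepages) free of $(cat "$dir"/nr_hugepages) hugepages"
done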
00:40:26.693 [2024-07-24 07:29:41.066451] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:26.693 [2024-07-24 07:29:41.066465] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:26.694 [2024-07-24 07:29:41.066477] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:26.694 [2024-07-24 07:29:41.066489] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:26.694 [2024-07-24 07:29:41.066660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:40:26.694 [2024-07-24 07:29:41.066774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:40:26.694 [2024-07-24 07:29:41.066864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:40:26.694 [2024-07-24 07:29:41.066892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:40:26.953 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:26.953 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:40:26.953 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:26.953 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:26.953 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:26.953 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:26.953 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:26.953 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:26.953 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:27.212 Malloc0 00:40:27.212 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:27.212 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:40:27.212 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:27.212 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:27.212 [2024-07-24 07:29:41.635080] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000298c0/0x7f58b39e4940) succeed. 00:40:27.212 [2024-07-24 07:29:41.644922] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029a40/0x7f58b399d940) succeed. 
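At this point the test has created a 64 MB malloc bdev with 512-byte blocks and an RDMA transport through its rpc_cmd helper; the subsystem, namespace and listener calls follow below. For reference, a minimal sketch of the same target-side bring-up issued directly with SPDK's scripts/rpc.py against the /var/tmp/spdk.sock socket the target listens on (the script path and socket default are assumptions; the RPC names and arguments are the ones visible in this log):
# Sketch only: replay of the rpc_cmd sequence used by this test.
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420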
00:40:27.472 Read completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Write completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Read completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Write completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Write completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Read completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Read completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Read completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Read completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Read completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Read completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Read completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Write completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Read completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Read completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Write completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Write completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Read completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Write completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Write completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Write completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Write completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Write completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Write completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Read completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Write completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Write completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Read completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Read completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Read completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Read completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.472 Write completed with error (sct=0, sc=8) 00:40:27.472 starting I/O failed 00:40:27.473 [2024-07-24 07:29:41.876974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:27.473 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:27.473 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:27.473 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:27.473 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@10 -- # set +x 00:40:27.473 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:27.473 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:27.473 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:27.473 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:27.473 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:27.473 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:40:27.473 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:27.473 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:27.473 [2024-07-24 07:29:41.967130] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:40:27.473 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:27.473 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:40:27.473 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:27.473 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:27.473 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:27.473 07:29:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1920723 00:40:28.412 Read completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 Read completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 Write completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 Read completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 Read completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 Write completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 Write completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 Read completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 Write completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 Write completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 Read completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 Read completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 Write completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 Write completed with error (sct=0, sc=8) 
00:40:28.412 starting I/O failed 00:40:28.412 Read completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 Read completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 Read completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 Read completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 Read completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 Write completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 Write completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 Write completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 Write completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 Read completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 Write completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 Read completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 Write completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 Write completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 Read completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 Read completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 Read completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 Read completed with error (sct=0, sc=8) 00:40:28.412 starting I/O failed 00:40:28.412 [2024-07-24 07:29:42.882590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.412 [2024-07-24 07:29:42.887931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.412 [2024-07-24 07:29:42.888016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.412 [2024-07-24 07:29:42.888047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.412 [2024-07-24 07:29:42.888063] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.412 [2024-07-24 07:29:42.888078] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.412 [2024-07-24 07:29:42.898027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.412 qpair failed and we were unable to recover it. 
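Each failed attempt above follows the same pattern: the target rejects the I/O-queue CONNECT with "Unknown controller ID 0x1" because the controller the host created belonged to the nvmf_tgt instance killed earlier, and the host sees the Fabrics CONNECT complete with sct 1, sc 130 (0x82) before marking the qpair unrecoverable. A minimal sketch, assuming the restarted target still answers on the default RPC socket, of how its view could be inspected while these retries run (nvmf_get_subsystems and nvmf_subsystem_get_controllers are standard SPDK RPCs; their use here is illustrative only):
# Sketch only: contrast the restarted target's state with the controller ID
# the host keeps trying to reconnect to.
./scripts/rpc.py nvmf_get_subsystems
./scripts/rpc.py nvmf_subsystem_get_controllers nqn.2016-06.io.spdk:cnode1
# 130 decimal is 0x82, the status code reported for each failed CONNECT.
printf '0x%x\n' 130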
00:40:28.412 [2024-07-24 07:29:42.907700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.412 [2024-07-24 07:29:42.907778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.412 [2024-07-24 07:29:42.907804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.412 [2024-07-24 07:29:42.907820] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.412 [2024-07-24 07:29:42.907832] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.412 [2024-07-24 07:29:42.918197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.412 qpair failed and we were unable to recover it. 00:40:28.412 [2024-07-24 07:29:42.927853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.412 [2024-07-24 07:29:42.927923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.412 [2024-07-24 07:29:42.927950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.412 [2024-07-24 07:29:42.927964] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.412 [2024-07-24 07:29:42.927977] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.412 [2024-07-24 07:29:42.938265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.412 qpair failed and we were unable to recover it. 00:40:28.412 [2024-07-24 07:29:42.947862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.412 [2024-07-24 07:29:42.947926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.412 [2024-07-24 07:29:42.947954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.412 [2024-07-24 07:29:42.947970] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.412 [2024-07-24 07:29:42.947981] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.412 [2024-07-24 07:29:42.958465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.412 qpair failed and we were unable to recover it. 
00:40:28.412 [2024-07-24 07:29:42.968092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.412 [2024-07-24 07:29:42.968157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.412 [2024-07-24 07:29:42.968183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.412 [2024-07-24 07:29:42.968197] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.412 [2024-07-24 07:29:42.968210] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.412 [2024-07-24 07:29:42.978340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.412 qpair failed and we were unable to recover it. 00:40:28.412 [2024-07-24 07:29:42.987926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.412 [2024-07-24 07:29:42.987984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.412 [2024-07-24 07:29:42.988008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.412 [2024-07-24 07:29:42.988024] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.412 [2024-07-24 07:29:42.988035] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.412 [2024-07-24 07:29:42.998433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.412 qpair failed and we were unable to recover it. 00:40:28.412 [2024-07-24 07:29:43.007976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.412 [2024-07-24 07:29:43.008035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.412 [2024-07-24 07:29:43.008063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.412 [2024-07-24 07:29:43.008077] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.413 [2024-07-24 07:29:43.008091] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.413 [2024-07-24 07:29:43.018486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.413 qpair failed and we were unable to recover it. 
00:40:28.413 [2024-07-24 07:29:43.028179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.413 [2024-07-24 07:29:43.028239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.413 [2024-07-24 07:29:43.028263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.413 [2024-07-24 07:29:43.028279] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.413 [2024-07-24 07:29:43.028294] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.413 [2024-07-24 07:29:43.038524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.413 qpair failed and we were unable to recover it. 00:40:28.673 [2024-07-24 07:29:43.048184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.673 [2024-07-24 07:29:43.048244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.673 [2024-07-24 07:29:43.048271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.673 [2024-07-24 07:29:43.048284] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.673 [2024-07-24 07:29:43.048300] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.673 [2024-07-24 07:29:43.058838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.673 qpair failed and we were unable to recover it. 00:40:28.673 [2024-07-24 07:29:43.068292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.673 [2024-07-24 07:29:43.068355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.673 [2024-07-24 07:29:43.068378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.673 [2024-07-24 07:29:43.068394] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.673 [2024-07-24 07:29:43.068405] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.673 [2024-07-24 07:29:43.078704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.673 qpair failed and we were unable to recover it. 
00:40:28.673 [2024-07-24 07:29:43.088304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.673 [2024-07-24 07:29:43.088363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.673 [2024-07-24 07:29:43.088389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.673 [2024-07-24 07:29:43.088403] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.673 [2024-07-24 07:29:43.088416] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.673 [2024-07-24 07:29:43.098726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.673 qpair failed and we were unable to recover it. 00:40:28.673 [2024-07-24 07:29:43.108362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.673 [2024-07-24 07:29:43.108430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.673 [2024-07-24 07:29:43.108454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.673 [2024-07-24 07:29:43.108469] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.673 [2024-07-24 07:29:43.108481] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.673 [2024-07-24 07:29:43.118816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.673 qpair failed and we were unable to recover it. 00:40:28.673 [2024-07-24 07:29:43.128355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.673 [2024-07-24 07:29:43.128415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.673 [2024-07-24 07:29:43.128441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.673 [2024-07-24 07:29:43.128455] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.673 [2024-07-24 07:29:43.128468] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.673 [2024-07-24 07:29:43.139031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.673 qpair failed and we were unable to recover it. 
00:40:28.673 [2024-07-24 07:29:43.148474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.673 [2024-07-24 07:29:43.148535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.673 [2024-07-24 07:29:43.148559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.673 [2024-07-24 07:29:43.148575] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.673 [2024-07-24 07:29:43.148587] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.673 [2024-07-24 07:29:43.158823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.673 qpair failed and we were unable to recover it. 00:40:28.673 [2024-07-24 07:29:43.168457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.673 [2024-07-24 07:29:43.168514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.673 [2024-07-24 07:29:43.168540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.673 [2024-07-24 07:29:43.168555] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.673 [2024-07-24 07:29:43.168569] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.673 [2024-07-24 07:29:43.178912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.673 qpair failed and we were unable to recover it. 00:40:28.673 [2024-07-24 07:29:43.188545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.673 [2024-07-24 07:29:43.188607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.673 [2024-07-24 07:29:43.188636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.673 [2024-07-24 07:29:43.188655] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.673 [2024-07-24 07:29:43.188666] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.673 [2024-07-24 07:29:43.199045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.673 qpair failed and we were unable to recover it. 
00:40:28.673 [2024-07-24 07:29:43.208571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.673 [2024-07-24 07:29:43.208635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.673 [2024-07-24 07:29:43.208661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.673 [2024-07-24 07:29:43.208677] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.673 [2024-07-24 07:29:43.208691] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.673 [2024-07-24 07:29:43.219323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.673 qpair failed and we were unable to recover it. 00:40:28.673 [2024-07-24 07:29:43.228643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.673 [2024-07-24 07:29:43.228705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.673 [2024-07-24 07:29:43.228729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.673 [2024-07-24 07:29:43.228745] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.673 [2024-07-24 07:29:43.228756] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.673 [2024-07-24 07:29:43.239011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.673 qpair failed and we were unable to recover it. 00:40:28.673 [2024-07-24 07:29:43.248636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.673 [2024-07-24 07:29:43.248697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.673 [2024-07-24 07:29:43.248724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.673 [2024-07-24 07:29:43.248738] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.673 [2024-07-24 07:29:43.248751] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.673 [2024-07-24 07:29:43.259332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.673 qpair failed and we were unable to recover it. 
00:40:28.673 [2024-07-24 07:29:43.268643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.674 [2024-07-24 07:29:43.268704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.674 [2024-07-24 07:29:43.268728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.674 [2024-07-24 07:29:43.268745] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.674 [2024-07-24 07:29:43.268756] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.674 [2024-07-24 07:29:43.279144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.674 qpair failed and we were unable to recover it. 00:40:28.674 [2024-07-24 07:29:43.288731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.674 [2024-07-24 07:29:43.288797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.674 [2024-07-24 07:29:43.288823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.674 [2024-07-24 07:29:43.288837] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.674 [2024-07-24 07:29:43.288851] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.933 [2024-07-24 07:29:43.301939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.933 qpair failed and we were unable to recover it. 00:40:28.933 [2024-07-24 07:29:43.308771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.933 [2024-07-24 07:29:43.308839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.933 [2024-07-24 07:29:43.308863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.933 [2024-07-24 07:29:43.308879] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.933 [2024-07-24 07:29:43.308890] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.933 [2024-07-24 07:29:43.319341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.933 qpair failed and we were unable to recover it. 
00:40:28.933 [2024-07-24 07:29:43.328826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.933 [2024-07-24 07:29:43.328883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.933 [2024-07-24 07:29:43.328920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.933 [2024-07-24 07:29:43.328933] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.933 [2024-07-24 07:29:43.328946] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.933 [2024-07-24 07:29:43.339347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.933 qpair failed and we were unable to recover it. 00:40:28.933 [2024-07-24 07:29:43.348903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.933 [2024-07-24 07:29:43.348962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.933 [2024-07-24 07:29:43.348986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.933 [2024-07-24 07:29:43.349001] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.933 [2024-07-24 07:29:43.349013] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.933 [2024-07-24 07:29:43.359593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.933 qpair failed and we were unable to recover it. 00:40:28.933 [2024-07-24 07:29:43.368997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.933 [2024-07-24 07:29:43.369060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.933 [2024-07-24 07:29:43.369090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.933 [2024-07-24 07:29:43.369103] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.933 [2024-07-24 07:29:43.369119] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.933 [2024-07-24 07:29:43.379631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.933 qpair failed and we were unable to recover it. 
00:40:28.933 [2024-07-24 07:29:43.388961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.933 [2024-07-24 07:29:43.389024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.933 [2024-07-24 07:29:43.389054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.933 [2024-07-24 07:29:43.389069] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.933 [2024-07-24 07:29:43.389081] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.933 [2024-07-24 07:29:43.399667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.933 qpair failed and we were unable to recover it. 00:40:28.933 [2024-07-24 07:29:43.409103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.933 [2024-07-24 07:29:43.409168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.933 [2024-07-24 07:29:43.409196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.933 [2024-07-24 07:29:43.409210] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.933 [2024-07-24 07:29:43.409223] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.933 [2024-07-24 07:29:43.419799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.933 qpair failed and we were unable to recover it. 00:40:28.933 [2024-07-24 07:29:43.429126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.933 [2024-07-24 07:29:43.429190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.933 [2024-07-24 07:29:43.429218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.933 [2024-07-24 07:29:43.429233] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.933 [2024-07-24 07:29:43.429245] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.933 [2024-07-24 07:29:43.439769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.933 qpair failed and we were unable to recover it. 
00:40:28.933 [2024-07-24 07:29:43.449246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.933 [2024-07-24 07:29:43.449303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.933 [2024-07-24 07:29:43.449337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.933 [2024-07-24 07:29:43.449351] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.933 [2024-07-24 07:29:43.449364] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.933 [2024-07-24 07:29:43.459989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.933 qpair failed and we were unable to recover it. 00:40:28.933 [2024-07-24 07:29:43.469344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.933 [2024-07-24 07:29:43.469407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.933 [2024-07-24 07:29:43.469434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.933 [2024-07-24 07:29:43.469450] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.933 [2024-07-24 07:29:43.469464] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.933 [2024-07-24 07:29:43.479921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.933 qpair failed and we were unable to recover it. 00:40:28.933 [2024-07-24 07:29:43.489316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.933 [2024-07-24 07:29:43.489377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.933 [2024-07-24 07:29:43.489405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.933 [2024-07-24 07:29:43.489421] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.933 [2024-07-24 07:29:43.489434] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.933 [2024-07-24 07:29:43.499874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.933 qpair failed and we were unable to recover it. 
00:40:28.933 [2024-07-24 07:29:43.509460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.933 [2024-07-24 07:29:43.509523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.933 [2024-07-24 07:29:43.509551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.933 [2024-07-24 07:29:43.509569] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.933 [2024-07-24 07:29:43.509581] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.933 [2024-07-24 07:29:43.520227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.933 qpair failed and we were unable to recover it. 00:40:28.933 [2024-07-24 07:29:43.529519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.933 [2024-07-24 07:29:43.529582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.933 [2024-07-24 07:29:43.529611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.933 [2024-07-24 07:29:43.529630] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.933 [2024-07-24 07:29:43.529644] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.933 [2024-07-24 07:29:43.540188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.933 qpair failed and we were unable to recover it. 00:40:28.933 [2024-07-24 07:29:43.549652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:28.933 [2024-07-24 07:29:43.549715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:28.933 [2024-07-24 07:29:43.549744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:28.933 [2024-07-24 07:29:43.549761] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:28.933 [2024-07-24 07:29:43.549772] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:28.933 [2024-07-24 07:29:43.560566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:28.933 qpair failed and we were unable to recover it. 
00:40:29.193 [2024-07-24 07:29:43.569552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.193 [2024-07-24 07:29:43.569620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.193 [2024-07-24 07:29:43.569652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.193 [2024-07-24 07:29:43.569665] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.193 [2024-07-24 07:29:43.569683] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.193 [2024-07-24 07:29:43.580152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.193 qpair failed and we were unable to recover it. 00:40:29.193 [2024-07-24 07:29:43.589732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.193 [2024-07-24 07:29:43.589793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.193 [2024-07-24 07:29:43.589816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.193 [2024-07-24 07:29:43.589832] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.193 [2024-07-24 07:29:43.589843] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.193 [2024-07-24 07:29:43.600401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.193 qpair failed and we were unable to recover it. 00:40:29.193 [2024-07-24 07:29:43.609667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.193 [2024-07-24 07:29:43.609730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.193 [2024-07-24 07:29:43.609761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.193 [2024-07-24 07:29:43.609775] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.193 [2024-07-24 07:29:43.609788] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.193 [2024-07-24 07:29:43.620362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.193 qpair failed and we were unable to recover it. 
00:40:29.193 [2024-07-24 07:29:43.629832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.193 [2024-07-24 07:29:43.629898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.193 [2024-07-24 07:29:43.629923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.193 [2024-07-24 07:29:43.629938] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.193 [2024-07-24 07:29:43.629950] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.193 [2024-07-24 07:29:43.640314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.193 qpair failed and we were unable to recover it. 00:40:29.193 [2024-07-24 07:29:43.649931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.193 [2024-07-24 07:29:43.649989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.193 [2024-07-24 07:29:43.650027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.193 [2024-07-24 07:29:43.650044] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.193 [2024-07-24 07:29:43.650057] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.193 [2024-07-24 07:29:43.660299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.193 qpair failed and we were unable to recover it. 00:40:29.193 [2024-07-24 07:29:43.669944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.193 [2024-07-24 07:29:43.670004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.193 [2024-07-24 07:29:43.670034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.193 [2024-07-24 07:29:43.670052] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.193 [2024-07-24 07:29:43.670063] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.193 [2024-07-24 07:29:43.680575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.193 qpair failed and we were unable to recover it. 
00:40:29.193 [2024-07-24 07:29:43.689949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.193 [2024-07-24 07:29:43.690009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.193 [2024-07-24 07:29:43.690042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.193 [2024-07-24 07:29:43.690056] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.193 [2024-07-24 07:29:43.690073] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.193 [2024-07-24 07:29:43.700749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.193 qpair failed and we were unable to recover it. 00:40:29.193 [2024-07-24 07:29:43.710123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.193 [2024-07-24 07:29:43.710186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.193 [2024-07-24 07:29:43.710216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.193 [2024-07-24 07:29:43.710232] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.193 [2024-07-24 07:29:43.710243] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.193 [2024-07-24 07:29:43.720588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.193 qpair failed and we were unable to recover it. 00:40:29.193 [2024-07-24 07:29:43.730187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.193 [2024-07-24 07:29:43.730249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.193 [2024-07-24 07:29:43.730279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.193 [2024-07-24 07:29:43.730292] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.193 [2024-07-24 07:29:43.730307] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.193 [2024-07-24 07:29:43.740618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.193 qpair failed and we were unable to recover it. 
00:40:29.193 [2024-07-24 07:29:43.750206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.193 [2024-07-24 07:29:43.750272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.193 [2024-07-24 07:29:43.750297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.193 [2024-07-24 07:29:43.750313] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.193 [2024-07-24 07:29:43.750325] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.193 [2024-07-24 07:29:43.760798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.193 qpair failed and we were unable to recover it. 00:40:29.193 [2024-07-24 07:29:43.770408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.193 [2024-07-24 07:29:43.770462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.193 [2024-07-24 07:29:43.770488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.193 [2024-07-24 07:29:43.770502] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.193 [2024-07-24 07:29:43.770515] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.193 [2024-07-24 07:29:43.780595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.193 qpair failed and we were unable to recover it. 00:40:29.193 [2024-07-24 07:29:43.790314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.193 [2024-07-24 07:29:43.790382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.193 [2024-07-24 07:29:43.790407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.193 [2024-07-24 07:29:43.790422] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.193 [2024-07-24 07:29:43.790434] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.193 [2024-07-24 07:29:43.800993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.193 qpair failed and we were unable to recover it. 
00:40:29.193 [2024-07-24 07:29:43.810383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.193 [2024-07-24 07:29:43.810441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.193 [2024-07-24 07:29:43.810475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.193 [2024-07-24 07:29:43.810489] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.194 [2024-07-24 07:29:43.810504] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.194 [2024-07-24 07:29:43.820982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.194 qpair failed and we were unable to recover it. 00:40:29.453 [2024-07-24 07:29:43.830483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.453 [2024-07-24 07:29:43.830544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.453 [2024-07-24 07:29:43.830576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.453 [2024-07-24 07:29:43.830594] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.453 [2024-07-24 07:29:43.830605] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.453 [2024-07-24 07:29:43.841012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.453 qpair failed and we were unable to recover it. 00:40:29.453 [2024-07-24 07:29:43.850518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.453 [2024-07-24 07:29:43.850576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.453 [2024-07-24 07:29:43.850602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.453 [2024-07-24 07:29:43.850616] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.453 [2024-07-24 07:29:43.850640] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.453 [2024-07-24 07:29:43.861204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.453 qpair failed and we were unable to recover it. 
00:40:29.453 [2024-07-24 07:29:43.870604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.453 [2024-07-24 07:29:43.870669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.453 [2024-07-24 07:29:43.870692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.453 [2024-07-24 07:29:43.870708] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.453 [2024-07-24 07:29:43.870720] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.453 [2024-07-24 07:29:43.880855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.453 qpair failed and we were unable to recover it. 00:40:29.453 [2024-07-24 07:29:43.890617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.453 [2024-07-24 07:29:43.890681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.453 [2024-07-24 07:29:43.890708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.453 [2024-07-24 07:29:43.890721] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.453 [2024-07-24 07:29:43.890737] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.453 [2024-07-24 07:29:43.900897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.453 qpair failed and we were unable to recover it. 00:40:29.453 [2024-07-24 07:29:43.910633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.453 [2024-07-24 07:29:43.910694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.453 [2024-07-24 07:29:43.910718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.453 [2024-07-24 07:29:43.910738] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.453 [2024-07-24 07:29:43.910752] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.453 [2024-07-24 07:29:43.923076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.453 qpair failed and we were unable to recover it. 
00:40:29.453 [2024-07-24 07:29:43.930634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.453 [2024-07-24 07:29:43.930695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.453 [2024-07-24 07:29:43.930724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.453 [2024-07-24 07:29:43.930740] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.453 [2024-07-24 07:29:43.930753] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.453 [2024-07-24 07:29:43.941227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.453 qpair failed and we were unable to recover it. 00:40:29.453 [2024-07-24 07:29:43.950802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.453 [2024-07-24 07:29:43.950862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.453 [2024-07-24 07:29:43.950894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.453 [2024-07-24 07:29:43.950910] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.453 [2024-07-24 07:29:43.950921] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.453 [2024-07-24 07:29:43.961379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.453 qpair failed and we were unable to recover it. 00:40:29.453 [2024-07-24 07:29:43.970859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.454 [2024-07-24 07:29:43.970920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.454 [2024-07-24 07:29:43.970958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.454 [2024-07-24 07:29:43.970972] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.454 [2024-07-24 07:29:43.970986] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.454 [2024-07-24 07:29:43.981473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.454 qpair failed and we were unable to recover it. 
00:40:29.454 [2024-07-24 07:29:43.990882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.454 [2024-07-24 07:29:43.990945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.454 [2024-07-24 07:29:43.990975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.454 [2024-07-24 07:29:43.990993] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.454 [2024-07-24 07:29:43.991005] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.454 [2024-07-24 07:29:44.001394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.454 qpair failed and we were unable to recover it. 00:40:29.454 [2024-07-24 07:29:44.010941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.454 [2024-07-24 07:29:44.011002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.454 [2024-07-24 07:29:44.011029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.454 [2024-07-24 07:29:44.011042] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.454 [2024-07-24 07:29:44.011060] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.454 [2024-07-24 07:29:44.021319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.454 qpair failed and we were unable to recover it. 00:40:29.454 [2024-07-24 07:29:44.030934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.454 [2024-07-24 07:29:44.030990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.454 [2024-07-24 07:29:44.031013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.454 [2024-07-24 07:29:44.031029] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.454 [2024-07-24 07:29:44.031041] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.454 [2024-07-24 07:29:44.041439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.454 qpair failed and we were unable to recover it. 
00:40:29.454 [2024-07-24 07:29:44.051068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.454 [2024-07-24 07:29:44.051135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.454 [2024-07-24 07:29:44.051163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.454 [2024-07-24 07:29:44.051176] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.454 [2024-07-24 07:29:44.051191] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.454 [2024-07-24 07:29:44.061690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.454 qpair failed and we were unable to recover it. 00:40:29.454 [2024-07-24 07:29:44.071266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.454 [2024-07-24 07:29:44.071338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.454 [2024-07-24 07:29:44.071363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.454 [2024-07-24 07:29:44.071379] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.454 [2024-07-24 07:29:44.071390] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.454 [2024-07-24 07:29:44.081501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.454 qpair failed and we were unable to recover it. 00:40:29.714 [2024-07-24 07:29:44.091035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.714 [2024-07-24 07:29:44.091092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.714 [2024-07-24 07:29:44.091119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.714 [2024-07-24 07:29:44.091136] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.714 [2024-07-24 07:29:44.091150] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.714 [2024-07-24 07:29:44.101399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.714 qpair failed and we were unable to recover it. 
00:40:29.714 [2024-07-24 07:29:44.111135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.714 [2024-07-24 07:29:44.111200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.714 [2024-07-24 07:29:44.111228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.714 [2024-07-24 07:29:44.111243] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.714 [2024-07-24 07:29:44.111254] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.714 [2024-07-24 07:29:44.121755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.714 qpair failed and we were unable to recover it. 00:40:29.714 [2024-07-24 07:29:44.131258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.714 [2024-07-24 07:29:44.131312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.714 [2024-07-24 07:29:44.131338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.714 [2024-07-24 07:29:44.131352] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.714 [2024-07-24 07:29:44.131367] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.714 [2024-07-24 07:29:44.141721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.714 qpair failed and we were unable to recover it. 00:40:29.714 [2024-07-24 07:29:44.151322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.714 [2024-07-24 07:29:44.151383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.714 [2024-07-24 07:29:44.151415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.714 [2024-07-24 07:29:44.151434] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.714 [2024-07-24 07:29:44.151446] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.714 [2024-07-24 07:29:44.161978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.714 qpair failed and we were unable to recover it. 
00:40:29.714 [2024-07-24 07:29:44.171359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.714 [2024-07-24 07:29:44.171418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.714 [2024-07-24 07:29:44.171449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.714 [2024-07-24 07:29:44.171463] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.714 [2024-07-24 07:29:44.171474] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.714 [2024-07-24 07:29:44.181857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.714 qpair failed and we were unable to recover it. 00:40:29.714 [2024-07-24 07:29:44.191338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.714 [2024-07-24 07:29:44.191393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.714 [2024-07-24 07:29:44.191418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.714 [2024-07-24 07:29:44.191432] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.714 [2024-07-24 07:29:44.191443] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.714 [2024-07-24 07:29:44.202166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.714 qpair failed and we were unable to recover it. 00:40:29.714 [2024-07-24 07:29:44.211528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.714 [2024-07-24 07:29:44.211585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.715 [2024-07-24 07:29:44.211608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.715 [2024-07-24 07:29:44.211623] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.715 [2024-07-24 07:29:44.211640] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.715 [2024-07-24 07:29:44.221750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.715 qpair failed and we were unable to recover it. 
00:40:29.715 [2024-07-24 07:29:44.231495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.715 [2024-07-24 07:29:44.231547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.715 [2024-07-24 07:29:44.231571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.715 [2024-07-24 07:29:44.231589] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.715 [2024-07-24 07:29:44.231601] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.715 [2024-07-24 07:29:44.241990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.715 qpair failed and we were unable to recover it. 00:40:29.715 [2024-07-24 07:29:44.251593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.715 [2024-07-24 07:29:44.251656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.715 [2024-07-24 07:29:44.251686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.715 [2024-07-24 07:29:44.251700] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.715 [2024-07-24 07:29:44.251711] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.715 [2024-07-24 07:29:44.262081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.715 qpair failed and we were unable to recover it. 00:40:29.715 [2024-07-24 07:29:44.271688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.715 [2024-07-24 07:29:44.271742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.715 [2024-07-24 07:29:44.271770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.715 [2024-07-24 07:29:44.271784] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.715 [2024-07-24 07:29:44.271795] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.715 [2024-07-24 07:29:44.282196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.715 qpair failed and we were unable to recover it. 
00:40:29.715 [2024-07-24 07:29:44.291796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.715 [2024-07-24 07:29:44.291853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.715 [2024-07-24 07:29:44.291888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.715 [2024-07-24 07:29:44.291902] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.715 [2024-07-24 07:29:44.291913] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.715 [2024-07-24 07:29:44.302298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.715 qpair failed and we were unable to recover it. 00:40:29.715 [2024-07-24 07:29:44.311786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.715 [2024-07-24 07:29:44.311842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.715 [2024-07-24 07:29:44.311866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.715 [2024-07-24 07:29:44.311881] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.715 [2024-07-24 07:29:44.311892] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.715 [2024-07-24 07:29:44.322447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.715 qpair failed and we were unable to recover it. 00:40:29.715 [2024-07-24 07:29:44.331882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.715 [2024-07-24 07:29:44.331936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.715 [2024-07-24 07:29:44.331961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.715 [2024-07-24 07:29:44.331975] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.715 [2024-07-24 07:29:44.331987] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.715 [2024-07-24 07:29:44.342601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.715 qpair failed and we were unable to recover it. 
00:40:29.975 [2024-07-24 07:29:44.351922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.975 [2024-07-24 07:29:44.351982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.975 [2024-07-24 07:29:44.352014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.975 [2024-07-24 07:29:44.352027] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.975 [2024-07-24 07:29:44.352042] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.975 [2024-07-24 07:29:44.362465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.975 qpair failed and we were unable to recover it. 00:40:29.975 [2024-07-24 07:29:44.371972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.975 [2024-07-24 07:29:44.372033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.975 [2024-07-24 07:29:44.372063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.975 [2024-07-24 07:29:44.372077] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.975 [2024-07-24 07:29:44.372090] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.975 [2024-07-24 07:29:44.383029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.975 qpair failed and we were unable to recover it. 00:40:29.975 [2024-07-24 07:29:44.392009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.975 [2024-07-24 07:29:44.392064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.975 [2024-07-24 07:29:44.392088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.975 [2024-07-24 07:29:44.392107] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.975 [2024-07-24 07:29:44.392118] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.975 [2024-07-24 07:29:44.402459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.975 qpair failed and we were unable to recover it. 
00:40:29.975 [2024-07-24 07:29:44.412042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.975 [2024-07-24 07:29:44.412096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.975 [2024-07-24 07:29:44.412119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.975 [2024-07-24 07:29:44.412136] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.975 [2024-07-24 07:29:44.412148] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.975 [2024-07-24 07:29:44.422630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.975 qpair failed and we were unable to recover it. 00:40:29.975 [2024-07-24 07:29:44.432166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.975 [2024-07-24 07:29:44.432223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.975 [2024-07-24 07:29:44.432246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.975 [2024-07-24 07:29:44.432260] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.975 [2024-07-24 07:29:44.432271] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.976 [2024-07-24 07:29:44.442600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.976 qpair failed and we were unable to recover it. 00:40:29.976 [2024-07-24 07:29:44.452163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.976 [2024-07-24 07:29:44.452223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.976 [2024-07-24 07:29:44.452247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.976 [2024-07-24 07:29:44.452261] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.976 [2024-07-24 07:29:44.452272] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.976 [2024-07-24 07:29:44.462540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.976 qpair failed and we were unable to recover it. 
00:40:29.976 [2024-07-24 07:29:44.472273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.976 [2024-07-24 07:29:44.472326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.976 [2024-07-24 07:29:44.472349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.976 [2024-07-24 07:29:44.472366] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.976 [2024-07-24 07:29:44.472377] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.976 [2024-07-24 07:29:44.482838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.976 qpair failed and we were unable to recover it. 00:40:29.976 [2024-07-24 07:29:44.492325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.976 [2024-07-24 07:29:44.492384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.976 [2024-07-24 07:29:44.492407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.976 [2024-07-24 07:29:44.492420] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.976 [2024-07-24 07:29:44.492431] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.976 [2024-07-24 07:29:44.502604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.976 qpair failed and we were unable to recover it. 00:40:29.976 [2024-07-24 07:29:44.512392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.976 [2024-07-24 07:29:44.512449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.976 [2024-07-24 07:29:44.512473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.976 [2024-07-24 07:29:44.512486] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.976 [2024-07-24 07:29:44.512497] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.976 [2024-07-24 07:29:44.522997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.976 qpair failed and we were unable to recover it. 
00:40:29.976 [2024-07-24 07:29:44.534034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.976 [2024-07-24 07:29:44.534094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.976 [2024-07-24 07:29:44.534123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.976 [2024-07-24 07:29:44.534140] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.976 [2024-07-24 07:29:44.534151] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.976 [2024-07-24 07:29:44.542821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.976 qpair failed and we were unable to recover it. 00:40:29.976 [2024-07-24 07:29:44.552500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.976 [2024-07-24 07:29:44.552552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.976 [2024-07-24 07:29:44.552576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.976 [2024-07-24 07:29:44.552594] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.976 [2024-07-24 07:29:44.552605] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.976 [2024-07-24 07:29:44.562969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.976 qpair failed and we were unable to recover it. 00:40:29.976 [2024-07-24 07:29:44.572550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.976 [2024-07-24 07:29:44.572607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.976 [2024-07-24 07:29:44.572643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.976 [2024-07-24 07:29:44.572658] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.976 [2024-07-24 07:29:44.572669] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.976 [2024-07-24 07:29:44.582944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.976 qpair failed and we were unable to recover it. 
00:40:29.976 [2024-07-24 07:29:44.592676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:29.976 [2024-07-24 07:29:44.592734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:29.976 [2024-07-24 07:29:44.592759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:29.976 [2024-07-24 07:29:44.592772] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:29.976 [2024-07-24 07:29:44.592784] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:29.976 [2024-07-24 07:29:44.603038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:29.976 qpair failed and we were unable to recover it. 00:40:30.236 [2024-07-24 07:29:44.612714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.236 [2024-07-24 07:29:44.612764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.236 [2024-07-24 07:29:44.612788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.236 [2024-07-24 07:29:44.612801] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.236 [2024-07-24 07:29:44.612812] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.236 [2024-07-24 07:29:44.622932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.236 qpair failed and we were unable to recover it. 00:40:30.236 [2024-07-24 07:29:44.632668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.236 [2024-07-24 07:29:44.632720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.236 [2024-07-24 07:29:44.632744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.236 [2024-07-24 07:29:44.632757] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.236 [2024-07-24 07:29:44.632768] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.236 [2024-07-24 07:29:44.643086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.236 qpair failed and we were unable to recover it. 
00:40:30.236 [2024-07-24 07:29:44.652828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.236 [2024-07-24 07:29:44.652883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.237 [2024-07-24 07:29:44.652907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.237 [2024-07-24 07:29:44.652920] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.237 [2024-07-24 07:29:44.652930] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.237 [2024-07-24 07:29:44.663154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.237 qpair failed and we were unable to recover it. 00:40:30.237 [2024-07-24 07:29:44.672874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.237 [2024-07-24 07:29:44.672929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.237 [2024-07-24 07:29:44.672953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.237 [2024-07-24 07:29:44.672966] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.237 [2024-07-24 07:29:44.672978] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.237 [2024-07-24 07:29:44.684963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.237 qpair failed and we were unable to recover it. 00:40:30.237 [2024-07-24 07:29:44.692976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.237 [2024-07-24 07:29:44.693029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.237 [2024-07-24 07:29:44.693053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.237 [2024-07-24 07:29:44.693067] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.237 [2024-07-24 07:29:44.693078] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.237 [2024-07-24 07:29:44.703471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.237 qpair failed and we were unable to recover it. 
00:40:30.237 [2024-07-24 07:29:44.712997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.237 [2024-07-24 07:29:44.713055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.237 [2024-07-24 07:29:44.713088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.237 [2024-07-24 07:29:44.713102] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.237 [2024-07-24 07:29:44.713113] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.237 [2024-07-24 07:29:44.723369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.237 qpair failed and we were unable to recover it. 00:40:30.237 [2024-07-24 07:29:44.732995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.237 [2024-07-24 07:29:44.733048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.237 [2024-07-24 07:29:44.733072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.237 [2024-07-24 07:29:44.733087] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.237 [2024-07-24 07:29:44.733098] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.237 [2024-07-24 07:29:44.743337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.237 qpair failed and we were unable to recover it. 00:40:30.237 [2024-07-24 07:29:44.753062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.237 [2024-07-24 07:29:44.753116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.237 [2024-07-24 07:29:44.753140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.237 [2024-07-24 07:29:44.753155] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.237 [2024-07-24 07:29:44.753167] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.237 [2024-07-24 07:29:44.763558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.237 qpair failed and we were unable to recover it. 
00:40:30.237 [2024-07-24 07:29:44.773182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.237 [2024-07-24 07:29:44.773232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.237 [2024-07-24 07:29:44.773255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.237 [2024-07-24 07:29:44.773272] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.237 [2024-07-24 07:29:44.773283] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.237 [2024-07-24 07:29:44.783554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.237 qpair failed and we were unable to recover it. 00:40:30.237 [2024-07-24 07:29:44.793203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.237 [2024-07-24 07:29:44.793258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.237 [2024-07-24 07:29:44.793282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.237 [2024-07-24 07:29:44.793296] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.237 [2024-07-24 07:29:44.793312] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.237 [2024-07-24 07:29:44.803518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.237 qpair failed and we were unable to recover it. 00:40:30.237 [2024-07-24 07:29:44.813192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.237 [2024-07-24 07:29:44.813257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.237 [2024-07-24 07:29:44.813287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.237 [2024-07-24 07:29:44.813301] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.237 [2024-07-24 07:29:44.813312] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.237 [2024-07-24 07:29:44.823801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.237 qpair failed and we were unable to recover it. 
00:40:30.237 [2024-07-24 07:29:44.833316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.237 [2024-07-24 07:29:44.833375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.237 [2024-07-24 07:29:44.833405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.237 [2024-07-24 07:29:44.833418] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.237 [2024-07-24 07:29:44.833430] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.237 [2024-07-24 07:29:44.844180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.237 qpair failed and we were unable to recover it. 00:40:30.237 [2024-07-24 07:29:44.853286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.237 [2024-07-24 07:29:44.853339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.237 [2024-07-24 07:29:44.853363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.237 [2024-07-24 07:29:44.853379] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.237 [2024-07-24 07:29:44.853390] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.237 [2024-07-24 07:29:44.864237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.237 qpair failed and we were unable to recover it. 00:40:30.504 [2024-07-24 07:29:44.873441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.504 [2024-07-24 07:29:44.873494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.504 [2024-07-24 07:29:44.873518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.504 [2024-07-24 07:29:44.873535] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.504 [2024-07-24 07:29:44.873546] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.504 [2024-07-24 07:29:44.883984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.504 qpair failed and we were unable to recover it. 
00:40:30.504 [2024-07-24 07:29:44.893510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.504 [2024-07-24 07:29:44.893576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.504 [2024-07-24 07:29:44.893604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.504 [2024-07-24 07:29:44.893618] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.504 [2024-07-24 07:29:44.893643] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.504 [2024-07-24 07:29:44.903989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.504 qpair failed and we were unable to recover it. 00:40:30.504 [2024-07-24 07:29:44.913659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.504 [2024-07-24 07:29:44.913717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.504 [2024-07-24 07:29:44.913749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.504 [2024-07-24 07:29:44.913762] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.504 [2024-07-24 07:29:44.913774] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.504 [2024-07-24 07:29:44.924060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.504 qpair failed and we were unable to recover it. 00:40:30.504 [2024-07-24 07:29:44.933612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.504 [2024-07-24 07:29:44.933669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.504 [2024-07-24 07:29:44.933693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.504 [2024-07-24 07:29:44.933707] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.504 [2024-07-24 07:29:44.933718] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.504 [2024-07-24 07:29:44.944061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.504 qpair failed and we were unable to recover it. 
00:40:30.504 [2024-07-24 07:29:44.953666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.504 [2024-07-24 07:29:44.953721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.504 [2024-07-24 07:29:44.953744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.504 [2024-07-24 07:29:44.953760] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.504 [2024-07-24 07:29:44.953771] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.504 [2024-07-24 07:29:44.964395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.504 qpair failed and we were unable to recover it. 00:40:30.504 [2024-07-24 07:29:44.973795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.505 [2024-07-24 07:29:44.973855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.505 [2024-07-24 07:29:44.973885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.505 [2024-07-24 07:29:44.973902] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.505 [2024-07-24 07:29:44.973913] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.505 [2024-07-24 07:29:44.984093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.505 qpair failed and we were unable to recover it. 00:40:30.505 [2024-07-24 07:29:44.993844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.505 [2024-07-24 07:29:44.993898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.505 [2024-07-24 07:29:44.993922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.505 [2024-07-24 07:29:44.993935] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.505 [2024-07-24 07:29:44.993947] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.505 [2024-07-24 07:29:45.004487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.505 qpair failed and we were unable to recover it. 
00:40:30.505 [2024-07-24 07:29:45.013918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.505 [2024-07-24 07:29:45.013971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.505 [2024-07-24 07:29:45.013995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.505 [2024-07-24 07:29:45.014008] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.505 [2024-07-24 07:29:45.014020] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.505 [2024-07-24 07:29:45.024377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.505 qpair failed and we were unable to recover it. 00:40:30.505 [2024-07-24 07:29:45.033862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.505 [2024-07-24 07:29:45.033919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.505 [2024-07-24 07:29:45.033943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.505 [2024-07-24 07:29:45.033956] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.505 [2024-07-24 07:29:45.033968] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.505 [2024-07-24 07:29:45.044612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.505 qpair failed and we were unable to recover it. 00:40:30.505 [2024-07-24 07:29:45.054074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.505 [2024-07-24 07:29:45.054132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.505 [2024-07-24 07:29:45.054156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.505 [2024-07-24 07:29:45.054170] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.505 [2024-07-24 07:29:45.054181] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.505 [2024-07-24 07:29:45.064473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.505 qpair failed and we were unable to recover it. 
00:40:30.505 [2024-07-24 07:29:45.074004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.505 [2024-07-24 07:29:45.074055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.505 [2024-07-24 07:29:45.074079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.505 [2024-07-24 07:29:45.074095] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.505 [2024-07-24 07:29:45.074107] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.505 [2024-07-24 07:29:45.084533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.505 qpair failed and we were unable to recover it. 00:40:30.505 [2024-07-24 07:29:45.094050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.505 [2024-07-24 07:29:45.094099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.505 [2024-07-24 07:29:45.094123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.505 [2024-07-24 07:29:45.094142] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.505 [2024-07-24 07:29:45.094153] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.505 [2024-07-24 07:29:45.104562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.505 qpair failed and we were unable to recover it. 00:40:30.505 [2024-07-24 07:29:45.114236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.505 [2024-07-24 07:29:45.114291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.505 [2024-07-24 07:29:45.114314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.505 [2024-07-24 07:29:45.114330] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.505 [2024-07-24 07:29:45.114341] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.505 [2024-07-24 07:29:45.124549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.505 qpair failed and we were unable to recover it. 
00:40:30.767 [2024-07-24 07:29:45.134204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.767 [2024-07-24 07:29:45.134260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.767 [2024-07-24 07:29:45.134284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.767 [2024-07-24 07:29:45.134297] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.767 [2024-07-24 07:29:45.134309] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.767 [2024-07-24 07:29:45.144806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.767 qpair failed and we were unable to recover it. 00:40:30.767 [2024-07-24 07:29:45.154207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.767 [2024-07-24 07:29:45.154262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.767 [2024-07-24 07:29:45.154289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.767 [2024-07-24 07:29:45.154302] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.767 [2024-07-24 07:29:45.154313] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.767 [2024-07-24 07:29:45.164733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.767 qpair failed and we were unable to recover it. 00:40:30.767 [2024-07-24 07:29:45.174398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.767 [2024-07-24 07:29:45.174451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.767 [2024-07-24 07:29:45.174475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.767 [2024-07-24 07:29:45.174489] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.767 [2024-07-24 07:29:45.174500] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.767 [2024-07-24 07:29:45.184613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.767 qpair failed and we were unable to recover it. 
00:40:30.767 [2024-07-24 07:29:45.194323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.767 [2024-07-24 07:29:45.194376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.767 [2024-07-24 07:29:45.194400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.767 [2024-07-24 07:29:45.194420] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.767 [2024-07-24 07:29:45.194432] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.767 [2024-07-24 07:29:45.204805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.767 qpair failed and we were unable to recover it. 00:40:30.767 [2024-07-24 07:29:45.214551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.767 [2024-07-24 07:29:45.214600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.767 [2024-07-24 07:29:45.214624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.767 [2024-07-24 07:29:45.214651] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.767 [2024-07-24 07:29:45.214663] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.767 [2024-07-24 07:29:45.224870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.767 qpair failed and we were unable to recover it. 00:40:30.767 [2024-07-24 07:29:45.234525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.767 [2024-07-24 07:29:45.234581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.767 [2024-07-24 07:29:45.234604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.767 [2024-07-24 07:29:45.234618] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.767 [2024-07-24 07:29:45.234638] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.767 [2024-07-24 07:29:45.244825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.767 qpair failed and we were unable to recover it. 
00:40:30.767 [2024-07-24 07:29:45.254550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.767 [2024-07-24 07:29:45.254606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.767 [2024-07-24 07:29:45.254638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.767 [2024-07-24 07:29:45.254653] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.767 [2024-07-24 07:29:45.254664] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.767 [2024-07-24 07:29:45.265153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.767 qpair failed and we were unable to recover it. 00:40:30.767 [2024-07-24 07:29:45.274685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.767 [2024-07-24 07:29:45.274738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.767 [2024-07-24 07:29:45.274762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.767 [2024-07-24 07:29:45.274777] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.767 [2024-07-24 07:29:45.274788] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.767 [2024-07-24 07:29:45.285217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.767 qpair failed and we were unable to recover it. 00:40:30.767 [2024-07-24 07:29:45.294678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.767 [2024-07-24 07:29:45.294737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.767 [2024-07-24 07:29:45.294761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.767 [2024-07-24 07:29:45.294774] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.767 [2024-07-24 07:29:45.294785] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.767 [2024-07-24 07:29:45.305231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.767 qpair failed and we were unable to recover it. 
00:40:30.767 [2024-07-24 07:29:45.314841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.767 [2024-07-24 07:29:45.314900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.767 [2024-07-24 07:29:45.314932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.767 [2024-07-24 07:29:45.314945] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.767 [2024-07-24 07:29:45.314957] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.767 [2024-07-24 07:29:45.325151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.767 qpair failed and we were unable to recover it. 00:40:30.767 [2024-07-24 07:29:45.334818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.767 [2024-07-24 07:29:45.334876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.767 [2024-07-24 07:29:45.334901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.767 [2024-07-24 07:29:45.334914] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.767 [2024-07-24 07:29:45.334925] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.767 [2024-07-24 07:29:45.345252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.767 qpair failed and we were unable to recover it. 00:40:30.767 [2024-07-24 07:29:45.354926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.767 [2024-07-24 07:29:45.354985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.767 [2024-07-24 07:29:45.355014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.767 [2024-07-24 07:29:45.355028] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.767 [2024-07-24 07:29:45.355039] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.767 [2024-07-24 07:29:45.365362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.767 qpair failed and we were unable to recover it. 
00:40:30.767 [2024-07-24 07:29:45.374925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.767 [2024-07-24 07:29:45.374975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.767 [2024-07-24 07:29:45.374998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.767 [2024-07-24 07:29:45.375018] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.767 [2024-07-24 07:29:45.375029] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:30.767 [2024-07-24 07:29:45.385560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:30.767 qpair failed and we were unable to recover it. 00:40:30.767 [2024-07-24 07:29:45.395043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:30.767 [2024-07-24 07:29:45.395095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:30.767 [2024-07-24 07:29:45.395117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:30.767 [2024-07-24 07:29:45.395135] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:30.767 [2024-07-24 07:29:45.395146] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.025 [2024-07-24 07:29:45.405320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.025 qpair failed and we were unable to recover it. 00:40:31.025 [2024-07-24 07:29:45.415217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.025 [2024-07-24 07:29:45.415268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.025 [2024-07-24 07:29:45.415292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.025 [2024-07-24 07:29:45.415315] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.025 [2024-07-24 07:29:45.415326] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.026 [2024-07-24 07:29:45.425378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.026 qpair failed and we were unable to recover it. 
00:40:31.026 [2024-07-24 07:29:45.435182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.026 [2024-07-24 07:29:45.435233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.026 [2024-07-24 07:29:45.435256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.026 [2024-07-24 07:29:45.435273] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.026 [2024-07-24 07:29:45.435284] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.026 [2024-07-24 07:29:45.446561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.026 qpair failed and we were unable to recover it. 00:40:31.026 [2024-07-24 07:29:45.455365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.026 [2024-07-24 07:29:45.455424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.026 [2024-07-24 07:29:45.455447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.026 [2024-07-24 07:29:45.455461] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.026 [2024-07-24 07:29:45.455472] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.026 [2024-07-24 07:29:45.465543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.026 qpair failed and we were unable to recover it. 00:40:31.026 [2024-07-24 07:29:45.475282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.026 [2024-07-24 07:29:45.475339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.026 [2024-07-24 07:29:45.475371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.026 [2024-07-24 07:29:45.475384] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.026 [2024-07-24 07:29:45.475396] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.026 [2024-07-24 07:29:45.486148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.026 qpair failed and we were unable to recover it. 
00:40:31.026 [2024-07-24 07:29:45.495352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.026 [2024-07-24 07:29:45.495411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.026 [2024-07-24 07:29:45.495442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.026 [2024-07-24 07:29:45.495455] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.026 [2024-07-24 07:29:45.495466] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.026 [2024-07-24 07:29:45.505731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.026 qpair failed and we were unable to recover it. 00:40:31.026 [2024-07-24 07:29:45.515483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.026 [2024-07-24 07:29:45.515537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.026 [2024-07-24 07:29:45.515560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.026 [2024-07-24 07:29:45.515576] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.026 [2024-07-24 07:29:45.515587] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.026 [2024-07-24 07:29:45.525721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.026 qpair failed and we were unable to recover it. 00:40:31.026 [2024-07-24 07:29:45.535551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.026 [2024-07-24 07:29:45.535602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.026 [2024-07-24 07:29:45.535631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.026 [2024-07-24 07:29:45.535645] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.026 [2024-07-24 07:29:45.535657] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.026 [2024-07-24 07:29:45.545669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.026 qpair failed and we were unable to recover it. 
00:40:31.026 [2024-07-24 07:29:45.555526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.026 [2024-07-24 07:29:45.555582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.026 [2024-07-24 07:29:45.555605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.026 [2024-07-24 07:29:45.555619] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.026 [2024-07-24 07:29:45.555636] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.026 [2024-07-24 07:29:45.565819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.026 qpair failed and we were unable to recover it. 00:40:31.026 [2024-07-24 07:29:45.575590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.026 [2024-07-24 07:29:45.575648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.026 [2024-07-24 07:29:45.575680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.026 [2024-07-24 07:29:45.575694] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.026 [2024-07-24 07:29:45.575705] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.026 [2024-07-24 07:29:45.585970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.026 qpair failed and we were unable to recover it. 00:40:31.026 [2024-07-24 07:29:45.595727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.026 [2024-07-24 07:29:45.595783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.026 [2024-07-24 07:29:45.595818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.026 [2024-07-24 07:29:45.595831] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.026 [2024-07-24 07:29:45.595843] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.026 [2024-07-24 07:29:45.606098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.026 qpair failed and we were unable to recover it. 
00:40:31.026 [2024-07-24 07:29:45.615757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.026 [2024-07-24 07:29:45.615817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.026 [2024-07-24 07:29:45.615841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.026 [2024-07-24 07:29:45.615854] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.026 [2024-07-24 07:29:45.615866] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.026 [2024-07-24 07:29:45.626144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.026 qpair failed and we were unable to recover it. 00:40:31.026 [2024-07-24 07:29:45.635846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.026 [2024-07-24 07:29:45.635904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.026 [2024-07-24 07:29:45.635927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.026 [2024-07-24 07:29:45.635941] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.026 [2024-07-24 07:29:45.635952] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.026 [2024-07-24 07:29:45.646316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.026 qpair failed and we were unable to recover it. 00:40:31.286 [2024-07-24 07:29:45.655879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.286 [2024-07-24 07:29:45.655933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.286 [2024-07-24 07:29:45.655957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.286 [2024-07-24 07:29:45.655976] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.286 [2024-07-24 07:29:45.655987] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.286 [2024-07-24 07:29:45.666550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.286 qpair failed and we were unable to recover it. 
00:40:31.286 [2024-07-24 07:29:45.675942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.286 [2024-07-24 07:29:45.676000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.286 [2024-07-24 07:29:45.676023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.286 [2024-07-24 07:29:45.676036] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.286 [2024-07-24 07:29:45.676054] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.286 [2024-07-24 07:29:45.686406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.286 qpair failed and we were unable to recover it. 00:40:31.286 [2024-07-24 07:29:45.696062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.286 [2024-07-24 07:29:45.696119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.286 [2024-07-24 07:29:45.696142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.286 [2024-07-24 07:29:45.696156] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.286 [2024-07-24 07:29:45.696167] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.286 [2024-07-24 07:29:45.706206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.286 qpair failed and we were unable to recover it. 00:40:31.286 [2024-07-24 07:29:45.716153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.286 [2024-07-24 07:29:45.716212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.286 [2024-07-24 07:29:45.716243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.286 [2024-07-24 07:29:45.716257] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.286 [2024-07-24 07:29:45.716268] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.286 [2024-07-24 07:29:45.726695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.286 qpair failed and we were unable to recover it. 
00:40:31.286 [2024-07-24 07:29:45.736249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.286 [2024-07-24 07:29:45.736301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.286 [2024-07-24 07:29:45.736325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.286 [2024-07-24 07:29:45.736343] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.286 [2024-07-24 07:29:45.736354] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.286 [2024-07-24 07:29:45.746722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.286 qpair failed and we were unable to recover it. 00:40:31.286 [2024-07-24 07:29:45.756263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.286 [2024-07-24 07:29:45.756318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.286 [2024-07-24 07:29:45.756343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.286 [2024-07-24 07:29:45.756358] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.286 [2024-07-24 07:29:45.756369] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.286 [2024-07-24 07:29:45.766757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.286 qpair failed and we were unable to recover it. 00:40:31.286 [2024-07-24 07:29:45.776300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.286 [2024-07-24 07:29:45.776363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.286 [2024-07-24 07:29:45.776387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.286 [2024-07-24 07:29:45.776400] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.286 [2024-07-24 07:29:45.776412] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.286 [2024-07-24 07:29:45.786642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.286 qpair failed and we were unable to recover it. 
00:40:31.286 [2024-07-24 07:29:45.796348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.286 [2024-07-24 07:29:45.796403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.286 [2024-07-24 07:29:45.796427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.286 [2024-07-24 07:29:45.796440] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.286 [2024-07-24 07:29:45.796452] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.286 [2024-07-24 07:29:45.806948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.286 qpair failed and we were unable to recover it. 00:40:31.286 [2024-07-24 07:29:45.816357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.286 [2024-07-24 07:29:45.816416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.286 [2024-07-24 07:29:45.816439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.286 [2024-07-24 07:29:45.816453] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.286 [2024-07-24 07:29:45.816464] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.286 [2024-07-24 07:29:45.826727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.286 qpair failed and we were unable to recover it. 00:40:31.286 [2024-07-24 07:29:45.836428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.286 [2024-07-24 07:29:45.836482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.286 [2024-07-24 07:29:45.836506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.286 [2024-07-24 07:29:45.836524] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.286 [2024-07-24 07:29:45.836535] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.286 [2024-07-24 07:29:45.846886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.286 qpair failed and we were unable to recover it. 
00:40:31.286 [2024-07-24 07:29:45.856472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.286 [2024-07-24 07:29:45.856522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.286 [2024-07-24 07:29:45.856546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.286 [2024-07-24 07:29:45.856570] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.286 [2024-07-24 07:29:45.856581] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.286 [2024-07-24 07:29:45.866814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.286 qpair failed and we were unable to recover it. 00:40:31.286 [2024-07-24 07:29:45.876608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.286 [2024-07-24 07:29:45.876670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.286 [2024-07-24 07:29:45.876693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.286 [2024-07-24 07:29:45.876707] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.286 [2024-07-24 07:29:45.876718] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.286 [2024-07-24 07:29:45.887213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.286 qpair failed and we were unable to recover it. 00:40:31.286 [2024-07-24 07:29:45.896608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.286 [2024-07-24 07:29:45.896664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.286 [2024-07-24 07:29:45.896688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.286 [2024-07-24 07:29:45.896704] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.286 [2024-07-24 07:29:45.896715] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.286 [2024-07-24 07:29:45.907824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.286 qpair failed and we were unable to recover it. 
00:40:31.545 [2024-07-24 07:29:45.916609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.545 [2024-07-24 07:29:45.916668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.545 [2024-07-24 07:29:45.916699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.545 [2024-07-24 07:29:45.916713] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.545 [2024-07-24 07:29:45.916724] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.545 [2024-07-24 07:29:45.927219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.545 qpair failed and we were unable to recover it. 00:40:31.545 [2024-07-24 07:29:45.936739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.545 [2024-07-24 07:29:45.936796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.545 [2024-07-24 07:29:45.936819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.545 [2024-07-24 07:29:45.936833] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.545 [2024-07-24 07:29:45.936845] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.545 [2024-07-24 07:29:45.947175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.545 qpair failed and we were unable to recover it. 00:40:31.545 [2024-07-24 07:29:45.956720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.545 [2024-07-24 07:29:45.956774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.545 [2024-07-24 07:29:45.956797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.545 [2024-07-24 07:29:45.956815] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.545 [2024-07-24 07:29:45.956826] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.545 [2024-07-24 07:29:45.967493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.545 qpair failed and we were unable to recover it. 
00:40:31.545 [2024-07-24 07:29:45.976831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.545 [2024-07-24 07:29:45.976885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.545 [2024-07-24 07:29:45.976909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.545 [2024-07-24 07:29:45.976927] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.545 [2024-07-24 07:29:45.976938] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.545 [2024-07-24 07:29:45.987387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.545 qpair failed and we were unable to recover it. 00:40:31.545 [2024-07-24 07:29:45.996881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.545 [2024-07-24 07:29:45.996939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.545 [2024-07-24 07:29:45.996961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.545 [2024-07-24 07:29:45.996975] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.545 [2024-07-24 07:29:45.996986] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.545 [2024-07-24 07:29:46.007281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.545 qpair failed and we were unable to recover it. 00:40:31.545 [2024-07-24 07:29:46.017010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.545 [2024-07-24 07:29:46.017064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.545 [2024-07-24 07:29:46.017087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.545 [2024-07-24 07:29:46.017102] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.545 [2024-07-24 07:29:46.017112] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.545 [2024-07-24 07:29:46.027330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.545 qpair failed and we were unable to recover it. 
00:40:31.545 [2024-07-24 07:29:46.037008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.545 [2024-07-24 07:29:46.037065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.545 [2024-07-24 07:29:46.037092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.545 [2024-07-24 07:29:46.037105] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.545 [2024-07-24 07:29:46.037116] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.545 [2024-07-24 07:29:46.047423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.545 qpair failed and we were unable to recover it. 00:40:31.545 [2024-07-24 07:29:46.057216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.545 [2024-07-24 07:29:46.057267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.545 [2024-07-24 07:29:46.057290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.545 [2024-07-24 07:29:46.057310] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.545 [2024-07-24 07:29:46.057321] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.545 [2024-07-24 07:29:46.067520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.545 qpair failed and we were unable to recover it. 00:40:31.545 [2024-07-24 07:29:46.077316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.545 [2024-07-24 07:29:46.077367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.545 [2024-07-24 07:29:46.077391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.545 [2024-07-24 07:29:46.077410] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.545 [2024-07-24 07:29:46.077421] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.545 [2024-07-24 07:29:46.087797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.545 qpair failed and we were unable to recover it. 
00:40:31.545 [2024-07-24 07:29:46.097176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.545 [2024-07-24 07:29:46.097231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.545 [2024-07-24 07:29:46.097255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.545 [2024-07-24 07:29:46.097268] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.545 [2024-07-24 07:29:46.097279] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.545 [2024-07-24 07:29:46.107438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.545 qpair failed and we were unable to recover it. 00:40:31.545 [2024-07-24 07:29:46.117322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.545 [2024-07-24 07:29:46.117383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.545 [2024-07-24 07:29:46.117415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.545 [2024-07-24 07:29:46.117428] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.545 [2024-07-24 07:29:46.117444] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.545 [2024-07-24 07:29:46.128001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.545 qpair failed and we were unable to recover it. 00:40:31.545 [2024-07-24 07:29:46.137281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.545 [2024-07-24 07:29:46.137338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.545 [2024-07-24 07:29:46.137361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.545 [2024-07-24 07:29:46.137374] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.545 [2024-07-24 07:29:46.137386] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.545 [2024-07-24 07:29:46.147594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.545 qpair failed and we were unable to recover it. 
00:40:31.545 [2024-07-24 07:29:46.157408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.545 [2024-07-24 07:29:46.157464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.545 [2024-07-24 07:29:46.157488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.545 [2024-07-24 07:29:46.157503] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.545 [2024-07-24 07:29:46.157515] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.545 [2024-07-24 07:29:46.167917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.545 qpair failed and we were unable to recover it. 00:40:31.805 [2024-07-24 07:29:46.177449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.805 [2024-07-24 07:29:46.177505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.805 [2024-07-24 07:29:46.177529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.805 [2024-07-24 07:29:46.177543] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.805 [2024-07-24 07:29:46.177554] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.805 [2024-07-24 07:29:46.187973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.805 qpair failed and we were unable to recover it. 00:40:31.805 [2024-07-24 07:29:46.197462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.805 [2024-07-24 07:29:46.197521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.805 [2024-07-24 07:29:46.197553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.805 [2024-07-24 07:29:46.197567] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.805 [2024-07-24 07:29:46.197578] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.805 [2024-07-24 07:29:46.207994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.805 qpair failed and we were unable to recover it. 
00:40:31.805 [2024-07-24 07:29:46.217610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.805 [2024-07-24 07:29:46.217873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.805 [2024-07-24 07:29:46.217899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.805 [2024-07-24 07:29:46.217912] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.805 [2024-07-24 07:29:46.217925] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.805 [2024-07-24 07:29:46.227846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.805 qpair failed and we were unable to recover it. 00:40:31.805 [2024-07-24 07:29:46.237668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.805 [2024-07-24 07:29:46.237724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.805 [2024-07-24 07:29:46.237747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.805 [2024-07-24 07:29:46.237763] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.805 [2024-07-24 07:29:46.237774] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.805 [2024-07-24 07:29:46.248093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.805 qpair failed and we were unable to recover it. 00:40:31.805 [2024-07-24 07:29:46.257645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.805 [2024-07-24 07:29:46.257707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.805 [2024-07-24 07:29:46.257731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.805 [2024-07-24 07:29:46.257744] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.805 [2024-07-24 07:29:46.257755] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.805 [2024-07-24 07:29:46.268187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.805 qpair failed and we were unable to recover it. 
00:40:31.805 [2024-07-24 07:29:46.277682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.805 [2024-07-24 07:29:46.277737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.805 [2024-07-24 07:29:46.277760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.805 [2024-07-24 07:29:46.277775] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.805 [2024-07-24 07:29:46.277786] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.805 [2024-07-24 07:29:46.288200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.805 qpair failed and we were unable to recover it. 00:40:31.805 [2024-07-24 07:29:46.297847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.805 [2024-07-24 07:29:46.297897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.805 [2024-07-24 07:29:46.297921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.805 [2024-07-24 07:29:46.297938] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.805 [2024-07-24 07:29:46.297949] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.805 [2024-07-24 07:29:46.308176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.805 qpair failed and we were unable to recover it. 00:40:31.805 [2024-07-24 07:29:46.317811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.805 [2024-07-24 07:29:46.317867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.805 [2024-07-24 07:29:46.317890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.805 [2024-07-24 07:29:46.317907] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.805 [2024-07-24 07:29:46.317918] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.805 [2024-07-24 07:29:46.328356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.805 qpair failed and we were unable to recover it. 
00:40:31.805 [2024-07-24 07:29:46.337945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.805 [2024-07-24 07:29:46.338001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.805 [2024-07-24 07:29:46.338024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.805 [2024-07-24 07:29:46.338038] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.805 [2024-07-24 07:29:46.338052] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.805 [2024-07-24 07:29:46.348427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.805 qpair failed and we were unable to recover it. 00:40:31.805 [2024-07-24 07:29:46.357981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.805 [2024-07-24 07:29:46.358037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.805 [2024-07-24 07:29:46.358061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.805 [2024-07-24 07:29:46.358074] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.805 [2024-07-24 07:29:46.358086] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.805 [2024-07-24 07:29:46.368956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.805 qpair failed and we were unable to recover it. 00:40:31.805 [2024-07-24 07:29:46.378089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.805 [2024-07-24 07:29:46.378141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.805 [2024-07-24 07:29:46.378164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.806 [2024-07-24 07:29:46.378178] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.806 [2024-07-24 07:29:46.378189] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.806 [2024-07-24 07:29:46.388702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.806 qpair failed and we were unable to recover it. 
00:40:31.806 [2024-07-24 07:29:46.398189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.806 [2024-07-24 07:29:46.398246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.806 [2024-07-24 07:29:46.398270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.806 [2024-07-24 07:29:46.398284] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.806 [2024-07-24 07:29:46.398295] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.806 [2024-07-24 07:29:46.408376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.806 qpair failed and we were unable to recover it. 00:40:31.806 [2024-07-24 07:29:46.418202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:31.806 [2024-07-24 07:29:46.418263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:31.806 [2024-07-24 07:29:46.418294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:31.806 [2024-07-24 07:29:46.418308] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:31.806 [2024-07-24 07:29:46.418319] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:31.806 [2024-07-24 07:29:46.428734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:31.806 qpair failed and we were unable to recover it. 00:40:32.066 [2024-07-24 07:29:46.438385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:32.066 [2024-07-24 07:29:46.438443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:32.066 [2024-07-24 07:29:46.438467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:32.066 [2024-07-24 07:29:46.438480] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:32.066 [2024-07-24 07:29:46.438491] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:32.066 [2024-07-24 07:29:46.448761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:32.066 qpair failed and we were unable to recover it. 
00:40:32.066 [2024-07-24 07:29:46.458336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:32.066 [2024-07-24 07:29:46.458394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:32.066 [2024-07-24 07:29:46.458417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:32.066 [2024-07-24 07:29:46.458437] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:32.066 [2024-07-24 07:29:46.458448] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:32.066 [2024-07-24 07:29:46.468961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:32.066 qpair failed and we were unable to recover it. 00:40:32.066 [2024-07-24 07:29:46.478353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:32.066 [2024-07-24 07:29:46.478410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:32.066 [2024-07-24 07:29:46.478446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:32.066 [2024-07-24 07:29:46.478459] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:32.066 [2024-07-24 07:29:46.478470] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:32.066 [2024-07-24 07:29:46.489067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:32.066 qpair failed and we were unable to recover it. 00:40:32.066 [2024-07-24 07:29:46.498378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:32.066 [2024-07-24 07:29:46.498437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:32.066 [2024-07-24 07:29:46.498469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:32.066 [2024-07-24 07:29:46.498482] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:32.066 [2024-07-24 07:29:46.498494] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:32.066 [2024-07-24 07:29:46.508860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:32.066 qpair failed and we were unable to recover it. 
00:40:32.066 [2024-07-24 07:29:46.518488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:32.066 [2024-07-24 07:29:46.518548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:32.066 [2024-07-24 07:29:46.518580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:32.066 [2024-07-24 07:29:46.518593] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:32.066 [2024-07-24 07:29:46.518605] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:32.066 [2024-07-24 07:29:46.528779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:32.066 qpair failed and we were unable to recover it. 00:40:32.066 [2024-07-24 07:29:46.538565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:32.066 [2024-07-24 07:29:46.538630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:32.066 [2024-07-24 07:29:46.538657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:32.066 [2024-07-24 07:29:46.538670] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:32.066 [2024-07-24 07:29:46.538681] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:32.066 [2024-07-24 07:29:46.549128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:32.066 qpair failed and we were unable to recover it. 00:40:32.066 [2024-07-24 07:29:46.558542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:32.066 [2024-07-24 07:29:46.558596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:32.066 [2024-07-24 07:29:46.558619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:32.066 [2024-07-24 07:29:46.558638] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:32.066 [2024-07-24 07:29:46.558654] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:32.066 [2024-07-24 07:29:46.569105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:32.066 qpair failed and we were unable to recover it. 
00:40:32.066 [2024-07-24 07:29:46.578663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:32.066 [2024-07-24 07:29:46.578718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:32.066 [2024-07-24 07:29:46.578742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:32.066 [2024-07-24 07:29:46.578755] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:32.066 [2024-07-24 07:29:46.578767] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:32.066 [2024-07-24 07:29:46.589143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:32.066 qpair failed and we were unable to recover it. 00:40:32.066 [2024-07-24 07:29:46.598681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:32.066 [2024-07-24 07:29:46.598737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:32.067 [2024-07-24 07:29:46.598761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:32.067 [2024-07-24 07:29:46.598774] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:32.067 [2024-07-24 07:29:46.598785] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:32.067 [2024-07-24 07:29:46.609249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:32.067 qpair failed and we were unable to recover it. 00:40:32.067 [2024-07-24 07:29:46.618750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:32.067 [2024-07-24 07:29:46.618809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:32.067 [2024-07-24 07:29:46.618834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:32.067 [2024-07-24 07:29:46.618847] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:32.067 [2024-07-24 07:29:46.618859] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:32.067 [2024-07-24 07:29:46.629318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:32.067 qpair failed and we were unable to recover it. 
00:40:32.067 [2024-07-24 07:29:46.638783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:32.067 [2024-07-24 07:29:46.638845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:32.067 [2024-07-24 07:29:46.638876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:32.067 [2024-07-24 07:29:46.638889] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:32.067 [2024-07-24 07:29:46.638900] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:32.067 [2024-07-24 07:29:46.649394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:32.067 qpair failed and we were unable to recover it. 00:40:32.067 [2024-07-24 07:29:46.658775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:32.067 [2024-07-24 07:29:46.658836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:32.067 [2024-07-24 07:29:46.658868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:32.067 [2024-07-24 07:29:46.658882] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:32.067 [2024-07-24 07:29:46.658893] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:32.067 [2024-07-24 07:29:46.669312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:32.067 qpair failed and we were unable to recover it. 00:40:32.067 [2024-07-24 07:29:46.681542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:32.067 [2024-07-24 07:29:46.681597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:32.067 [2024-07-24 07:29:46.681621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:32.067 [2024-07-24 07:29:46.681645] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:32.067 [2024-07-24 07:29:46.681656] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:32.067 [2024-07-24 07:29:46.689316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:32.067 qpair failed and we were unable to recover it. 
00:40:32.327 [2024-07-24 07:29:46.698945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:32.327 [2024-07-24 07:29:46.699000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:32.327 [2024-07-24 07:29:46.699024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:32.327 [2024-07-24 07:29:46.699038] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:32.327 [2024-07-24 07:29:46.699050] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:32.327 [2024-07-24 07:29:46.709632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:32.327 qpair failed and we were unable to recover it. 00:40:32.327 [2024-07-24 07:29:46.718948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:32.327 [2024-07-24 07:29:46.719001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:32.327 [2024-07-24 07:29:46.719025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:32.327 [2024-07-24 07:29:46.719043] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:32.327 [2024-07-24 07:29:46.719055] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:32.327 [2024-07-24 07:29:46.729461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:32.327 qpair failed and we were unable to recover it. 00:40:32.327 [2024-07-24 07:29:46.738933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:32.327 [2024-07-24 07:29:46.738993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:32.327 [2024-07-24 07:29:46.739024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:32.327 [2024-07-24 07:29:46.739041] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:32.327 [2024-07-24 07:29:46.739052] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:32.327 [2024-07-24 07:29:46.749503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:32.327 qpair failed and we were unable to recover it. 
00:40:32.327 [2024-07-24 07:29:46.759180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:32.327 [2024-07-24 07:29:46.759236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:32.327 [2024-07-24 07:29:46.759260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:32.327 [2024-07-24 07:29:46.759275] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:32.327 [2024-07-24 07:29:46.759286] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:32.327 [2024-07-24 07:29:46.770023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:32.327 qpair failed and we were unable to recover it. 00:40:32.327 [2024-07-24 07:29:46.779182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:32.327 [2024-07-24 07:29:46.779236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:32.327 [2024-07-24 07:29:46.779260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:32.327 [2024-07-24 07:29:46.779273] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:32.327 [2024-07-24 07:29:46.779284] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:32.327 [2024-07-24 07:29:46.789719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:32.327 qpair failed and we were unable to recover it. 00:40:32.327 [2024-07-24 07:29:46.799248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:32.327 [2024-07-24 07:29:46.799302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:32.327 [2024-07-24 07:29:46.799325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:32.327 [2024-07-24 07:29:46.799341] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:32.327 [2024-07-24 07:29:46.799352] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:32.327 [2024-07-24 07:29:46.809657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:32.327 qpair failed and we were unable to recover it. 
00:40:32.327 [2024-07-24 07:29:46.819263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:32.327 [2024-07-24 07:29:46.819320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:32.327 [2024-07-24 07:29:46.819352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:32.327 [2024-07-24 07:29:46.819366] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:32.327 [2024-07-24 07:29:46.819377] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:32.327 [2024-07-24 07:29:46.829722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:32.327 qpair failed and we were unable to recover it. 00:40:32.327 [2024-07-24 07:29:46.839384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:32.327 [2024-07-24 07:29:46.839442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:32.327 [2024-07-24 07:29:46.839466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:32.327 [2024-07-24 07:29:46.839479] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:32.327 [2024-07-24 07:29:46.839490] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:32.327 [2024-07-24 07:29:46.849672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:32.327 qpair failed and we were unable to recover it. 00:40:32.327 [2024-07-24 07:29:46.859396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:32.327 [2024-07-24 07:29:46.859448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:32.327 [2024-07-24 07:29:46.859472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:32.328 [2024-07-24 07:29:46.859490] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:32.328 [2024-07-24 07:29:46.859502] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:32.328 [2024-07-24 07:29:46.869815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:32.328 qpair failed and we were unable to recover it. 
00:40:32.328 [2024-07-24 07:29:46.879370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:32.328 [2024-07-24 07:29:46.879424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:32.328 [2024-07-24 07:29:46.879448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:32.328 [2024-07-24 07:29:46.879463] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:32.328 [2024-07-24 07:29:46.879474] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:32.328 [2024-07-24 07:29:46.889942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:32.328 qpair failed and we were unable to recover it. 00:40:32.328 [2024-07-24 07:29:46.899524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:32.328 [2024-07-24 07:29:46.899573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:32.328 [2024-07-24 07:29:46.899597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:32.328 [2024-07-24 07:29:46.899610] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:32.328 [2024-07-24 07:29:46.899621] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:32.328 [2024-07-24 07:29:46.909969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:32.328 qpair failed and we were unable to recover it. 00:40:32.328 [2024-07-24 07:29:46.919607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:32.328 [2024-07-24 07:29:46.919668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:32.328 [2024-07-24 07:29:46.919703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:32.328 [2024-07-24 07:29:46.919717] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:32.328 [2024-07-24 07:29:46.919728] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:32.328 [2024-07-24 07:29:46.930168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:32.328 qpair failed and we were unable to recover it. 
00:40:32.328 [2024-07-24 07:29:46.939717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:32.328 [2024-07-24 07:29:46.939776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:32.328 [2024-07-24 07:29:46.939806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:32.328 [2024-07-24 07:29:46.939821] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:32.328 [2024-07-24 07:29:46.939833] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:32.328 [2024-07-24 07:29:46.949990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:32.328 qpair failed and we were unable to recover it. 00:40:32.328 [2024-07-24 07:29:46.950284] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:40:32.328 A controller has encountered a failure and is being reset. 00:40:32.587 [2024-07-24 07:29:46.959810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:32.587 [2024-07-24 07:29:46.959882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:32.587 [2024-07-24 07:29:46.959921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:32.587 [2024-07-24 07:29:46.959942] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:32.587 [2024-07-24 07:29:46.959961] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:40:32.587 [2024-07-24 07:29:46.970178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:32.587 qpair failed and we were unable to recover it. 00:40:32.587 [2024-07-24 07:29:46.979857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:32.587 [2024-07-24 07:29:46.979925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:32.587 [2024-07-24 07:29:46.979951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:32.587 [2024-07-24 07:29:46.979967] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:32.587 [2024-07-24 07:29:46.979979] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:40:32.587 [2024-07-24 07:29:46.991321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:32.587 qpair failed and we were unable to recover it. 
00:40:32.587 [2024-07-24 07:29:46.991673] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:40:32.587 [2024-07-24 07:29:47.034409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:40:32.587 Controller properly reset. 00:40:32.847 Initializing NVMe Controllers 00:40:32.847 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:40:32.847 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:40:32.847 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:40:32.847 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:40:32.847 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:40:32.847 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:40:32.847 Initialization complete. Launching workers. 00:40:32.847 Starting thread on core 1 00:40:32.847 Starting thread on core 2 00:40:32.847 Starting thread on core 3 00:40:32.847 Starting thread on core 0 00:40:32.847 07:29:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:40:32.847 00:40:32.847 real 0m12.104s 00:40:32.847 user 0m26.071s 00:40:32.847 sys 0m2.806s 00:40:32.847 07:29:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:32.847 07:29:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:32.847 ************************************ 00:40:32.847 END TEST nvmf_target_disconnect_tc2 00:40:32.847 ************************************ 00:40:32.847 07:29:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:40:32.847 07:29:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:40:32.847 07:29:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:32.847 07:29:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:32.847 07:29:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:40:32.847 ************************************ 00:40:32.847 START TEST nvmf_target_disconnect_tc3 00:40:32.847 ************************************ 00:40:32.847 07:29:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc3 00:40:32.847 07:29:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=1922604 00:40:32.847 07:29:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:40:32.847 07:29:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:40:33.105 EAL: No free 2048 kB hugepages 
reported on node 1 00:40:35.011 07:29:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 1921496 00:40:35.011 07:29:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:40:36.385 Write completed with error (sct=0, sc=8) 00:40:36.385 starting I/O failed 00:40:36.385 Read completed with error (sct=0, sc=8) 00:40:36.385 starting I/O failed 00:40:36.385 Write completed with error (sct=0, sc=8) 00:40:36.385 starting I/O failed 00:40:36.385 Write completed with error (sct=0, sc=8) 00:40:36.385 starting I/O failed 00:40:36.385 Read completed with error (sct=0, sc=8) 00:40:36.385 starting I/O failed 00:40:36.385 Write completed with error (sct=0, sc=8) 00:40:36.385 starting I/O failed 00:40:36.385 Read completed with error (sct=0, sc=8) 00:40:36.385 starting I/O failed 00:40:36.385 Write completed with error (sct=0, sc=8) 00:40:36.385 starting I/O failed 00:40:36.385 Write completed with error (sct=0, sc=8) 00:40:36.385 starting I/O failed 00:40:36.385 Read completed with error (sct=0, sc=8) 00:40:36.385 starting I/O failed 00:40:36.385 Write completed with error (sct=0, sc=8) 00:40:36.385 starting I/O failed 00:40:36.385 Read completed with error (sct=0, sc=8) 00:40:36.385 starting I/O failed 00:40:36.385 Write completed with error (sct=0, sc=8) 00:40:36.385 starting I/O failed 00:40:36.385 Read completed with error (sct=0, sc=8) 00:40:36.385 starting I/O failed 00:40:36.385 Write completed with error (sct=0, sc=8) 00:40:36.385 starting I/O failed 00:40:36.385 Write completed with error (sct=0, sc=8) 00:40:36.385 starting I/O failed 00:40:36.385 Write completed with error (sct=0, sc=8) 00:40:36.385 starting I/O failed 00:40:36.385 Read completed with error (sct=0, sc=8) 00:40:36.385 starting I/O failed 00:40:36.385 Read completed with error (sct=0, sc=8) 00:40:36.385 starting I/O failed 00:40:36.385 Read completed with error (sct=0, sc=8) 00:40:36.385 starting I/O failed 00:40:36.385 Write completed with error (sct=0, sc=8) 00:40:36.385 starting I/O failed 00:40:36.385 Read completed with error (sct=0, sc=8) 00:40:36.385 starting I/O failed 00:40:36.385 Read completed with error (sct=0, sc=8) 00:40:36.385 starting I/O failed 00:40:36.385 Read completed with error (sct=0, sc=8) 00:40:36.385 starting I/O failed 00:40:36.385 Write completed with error (sct=0, sc=8) 00:40:36.385 starting I/O failed 00:40:36.385 Write completed with error (sct=0, sc=8) 00:40:36.385 starting I/O failed 00:40:36.385 Write completed with error (sct=0, sc=8) 00:40:36.385 starting I/O failed 00:40:36.385 Write completed with error (sct=0, sc=8) 00:40:36.385 starting I/O failed 00:40:36.385 Write completed with error (sct=0, sc=8) 00:40:36.385 starting I/O failed 00:40:36.386 Read completed with error (sct=0, sc=8) 00:40:36.386 starting I/O failed 00:40:36.386 Read completed with error (sct=0, sc=8) 00:40:36.386 starting I/O failed 00:40:36.386 Read completed with error (sct=0, sc=8) 00:40:36.386 starting I/O failed 00:40:36.386 [2024-07-24 07:29:50.668465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:36.989 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 1921496 Killed "${NVMF_APP[@]}" "$@" 00:40:36.989 07:29:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:40:36.989 
07:29:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:40:36.989 07:29:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:36.989 07:29:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:36.989 07:29:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:40:36.989 07:29:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1923164 00:40:36.989 07:29:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1923164 00:40:36.989 07:29:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:40:36.989 07:29:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1923164 ']' 00:40:36.989 07:29:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:36.989 07:29:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:36.989 07:29:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:36.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:36.989 07:29:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:36.989 07:29:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:40:36.989 [2024-07-24 07:29:51.476329] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:40:36.989 [2024-07-24 07:29:51.476424] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:36.989 EAL: No free 2048 kB hugepages reported on node 1 00:40:37.248 [2024-07-24 07:29:51.651210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:37.248 Write completed with error (sct=0, sc=8) 00:40:37.248 starting I/O failed 00:40:37.249 Write completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 Write completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 Write completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 Write completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 Read completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 Write completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 Read completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 Write completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 Read completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 Read completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 Write completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 Read completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 Read completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 Write completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 Write completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 Write completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 Write completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 Read completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 Write completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 Write completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 Write completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 Write completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 Write completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 Read completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 Read completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 Read completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 Write completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 Read completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 Read completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 Write completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 Write completed with error (sct=0, sc=8) 00:40:37.249 starting I/O failed 00:40:37.249 [2024-07-24 07:29:51.673867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:37.249 [2024-07-24 07:29:51.864539] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:40:37.249 [2024-07-24 07:29:51.864584] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:37.249 [2024-07-24 07:29:51.864600] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:37.249 [2024-07-24 07:29:51.864611] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:37.249 [2024-07-24 07:29:51.864623] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:37.249 [2024-07-24 07:29:51.864812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:40:37.249 [2024-07-24 07:29:51.864895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:40:37.249 [2024-07-24 07:29:51.864961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:40:37.249 [2024-07-24 07:29:51.864992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:40:37.817 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:37.817 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@862 -- # return 0 00:40:37.817 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:37.817 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:37.817 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:40:37.817 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:37.817 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:37.817 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:37.817 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:40:37.817 Malloc0 00:40:37.817 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:37.817 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:40:37.817 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:37.817 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:40:37.817 [2024-07-24 07:29:52.431715] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000298c0/0x7f9c12d3f940) succeed. 00:40:37.817 [2024-07-24 07:29:52.441501] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029a40/0x7f9c12cf9940) succeed. 
00:40:38.077 Read completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Write completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Write completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Read completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Read completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Read completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Read completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Read completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Read completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Write completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Write completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Read completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Read completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Write completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Read completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Write completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Read completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Write completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Read completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Write completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Write completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Write completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Read completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Write completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Read completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Read completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Write completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Write completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Read completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Read completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Read completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 Write completed with error (sct=0, sc=8) 00:40:38.077 starting I/O failed 00:40:38.077 [2024-07-24 07:29:52.679422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:38.077 [2024-07-24 07:29:52.681193] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:40:38.077 [2024-07-24 07:29:52.681227] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:40:38.077 [2024-07-24 07:29:52.681240] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:40:38.336 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:40:38.336 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:38.336 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:38.336 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:40:38.336 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:38.336 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:38.336 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:38.336 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:40:38.336 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:38.336 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:40:38.336 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:38.336 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:40:38.336 [2024-07-24 07:29:52.780048] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:40:38.336 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:38.336 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:40:38.336 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:38.336 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:40:38.336 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:38.336 07:29:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 1922604 00:40:39.274 [2024-07-24 07:29:53.685405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:39.274 qpair failed and we were unable to recover it. 
00:40:39.274 [2024-07-24 07:29:53.687093] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:40:39.274 [2024-07-24 07:29:53.687122] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:40:39.274 [2024-07-24 07:29:53.687135] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:40:40.208 [2024-07-24 07:29:54.691151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:40.208 qpair failed and we were unable to recover it. 00:40:40.208 [2024-07-24 07:29:54.692993] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:40:40.208 [2024-07-24 07:29:54.693022] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:40:40.208 [2024-07-24 07:29:54.693035] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:40:41.144 [2024-07-24 07:29:55.697181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:41.144 qpair failed and we were unable to recover it. 00:40:41.144 [2024-07-24 07:29:55.698881] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:40:41.144 [2024-07-24 07:29:55.698914] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:40:41.144 [2024-07-24 07:29:55.698927] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:40:42.081 [2024-07-24 07:29:56.703039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:42.081 qpair failed and we were unable to recover it. 00:40:42.081 [2024-07-24 07:29:56.704917] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:40:42.081 [2024-07-24 07:29:56.704956] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:40:42.081 [2024-07-24 07:29:56.704969] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:40:43.459 [2024-07-24 07:29:57.708950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:43.459 qpair failed and we were unable to recover it. 00:40:43.459 [2024-07-24 07:29:57.710641] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:40:43.459 [2024-07-24 07:29:57.710671] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:40:43.459 [2024-07-24 07:29:57.710684] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:40:44.398 [2024-07-24 07:29:58.714664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:44.398 qpair failed and we were unable to recover it. 
00:40:44.398 [2024-07-24 07:29:58.716649] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:40:44.398 [2024-07-24 07:29:58.716691] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:40:44.398 [2024-07-24 07:29:58.716704] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cfb40 00:40:45.336 [2024-07-24 07:29:59.720825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.336 qpair failed and we were unable to recover it. 00:40:45.336 [2024-07-24 07:29:59.722677] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:40:45.336 [2024-07-24 07:29:59.722706] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:40:45.336 [2024-07-24 07:29:59.722719] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cfb40 00:40:46.272 [2024-07-24 07:30:00.726783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:46.272 qpair failed and we were unable to recover it. 00:40:46.272 [2024-07-24 07:30:00.728751] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:40:46.272 [2024-07-24 07:30:00.728798] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:40:46.272 [2024-07-24 07:30:00.728815] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:40:47.209 [2024-07-24 07:30:01.732916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:47.209 qpair failed and we were unable to recover it. 00:40:47.209 [2024-07-24 07:30:01.734525] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:40:47.209 [2024-07-24 07:30:01.734553] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:40:47.209 [2024-07-24 07:30:01.734565] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:40:48.145 [2024-07-24 07:30:02.738677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:48.145 qpair failed and we were unable to recover it. 
00:40:49.518 Read completed with error (sct=0, sc=8) 00:40:49.518 starting I/O failed 00:40:49.518 Read completed with error (sct=0, sc=8) 00:40:49.518 starting I/O failed 00:40:49.518 Read completed with error (sct=0, sc=8) 00:40:49.518 starting I/O failed 00:40:49.518 Read completed with error (sct=0, sc=8) 00:40:49.518 starting I/O failed 00:40:49.518 Write completed with error (sct=0, sc=8) 00:40:49.518 starting I/O failed 00:40:49.518 Read completed with error (sct=0, sc=8) 00:40:49.518 starting I/O failed 00:40:49.518 Write completed with error (sct=0, sc=8) 00:40:49.519 starting I/O failed 00:40:49.519 Read completed with error (sct=0, sc=8) 00:40:49.519 starting I/O failed 00:40:49.519 Read completed with error (sct=0, sc=8) 00:40:49.519 starting I/O failed 00:40:49.519 Write completed with error (sct=0, sc=8) 00:40:49.519 starting I/O failed 00:40:49.519 Read completed with error (sct=0, sc=8) 00:40:49.519 starting I/O failed 00:40:49.519 Write completed with error (sct=0, sc=8) 00:40:49.519 starting I/O failed 00:40:49.519 Write completed with error (sct=0, sc=8) 00:40:49.519 starting I/O failed 00:40:49.519 Read completed with error (sct=0, sc=8) 00:40:49.519 starting I/O failed 00:40:49.519 Read completed with error (sct=0, sc=8) 00:40:49.519 starting I/O failed 00:40:49.519 Read completed with error (sct=0, sc=8) 00:40:49.519 starting I/O failed 00:40:49.519 Write completed with error (sct=0, sc=8) 00:40:49.519 starting I/O failed 00:40:49.519 Read completed with error (sct=0, sc=8) 00:40:49.519 starting I/O failed 00:40:49.519 Write completed with error (sct=0, sc=8) 00:40:49.519 starting I/O failed 00:40:49.519 Read completed with error (sct=0, sc=8) 00:40:49.519 starting I/O failed 00:40:49.519 Read completed with error (sct=0, sc=8) 00:40:49.519 starting I/O failed 00:40:49.519 Read completed with error (sct=0, sc=8) 00:40:49.519 starting I/O failed 00:40:49.519 Write completed with error (sct=0, sc=8) 00:40:49.519 starting I/O failed 00:40:49.519 Write completed with error (sct=0, sc=8) 00:40:49.519 starting I/O failed 00:40:49.519 Write completed with error (sct=0, sc=8) 00:40:49.519 starting I/O failed 00:40:49.519 Read completed with error (sct=0, sc=8) 00:40:49.519 starting I/O failed 00:40:49.519 Write completed with error (sct=0, sc=8) 00:40:49.519 starting I/O failed 00:40:49.519 Read completed with error (sct=0, sc=8) 00:40:49.519 starting I/O failed 00:40:49.519 Read completed with error (sct=0, sc=8) 00:40:49.519 starting I/O failed 00:40:49.519 Read completed with error (sct=0, sc=8) 00:40:49.519 starting I/O failed 00:40:49.519 Read completed with error (sct=0, sc=8) 00:40:49.519 starting I/O failed 00:40:49.519 Read completed with error (sct=0, sc=8) 00:40:49.519 starting I/O failed 00:40:49.519 [2024-07-24 07:30:03.744587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:49.519 [2024-07-24 07:30:03.744683] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:40:49.519 A controller has encountered a failure and is being reset. 
00:40:49.519 Resorting to new failover address 192.168.100.9 00:40:49.519 [2024-07-24 07:30:03.746476] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:40:49.519 [2024-07-24 07:30:03.746505] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:40:49.519 [2024-07-24 07:30:03.746517] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:50.455 [2024-07-24 07:30:04.750749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:50.455 qpair failed and we were unable to recover it. 00:40:50.455 [2024-07-24 07:30:04.752715] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:40:50.455 [2024-07-24 07:30:04.752743] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:40:50.455 [2024-07-24 07:30:04.752756] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:40:51.392 [2024-07-24 07:30:05.756983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:51.392 qpair failed and we were unable to recover it. 00:40:51.392 [2024-07-24 07:30:05.757219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:51.392 [2024-07-24 07:30:05.757380] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:40:51.392 [2024-07-24 07:30:05.803946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:40:51.392 Controller properly reset. 00:40:51.392 Initializing NVMe Controllers 00:40:51.392 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:40:51.392 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:40:51.392 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:40:51.392 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:40:51.392 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:40:51.392 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:40:51.392 Initialization complete. Launching workers. 
00:40:51.392 Starting thread on core 1 00:40:51.392 Starting thread on core 2 00:40:51.392 Starting thread on core 3 00:40:51.392 Starting thread on core 0 00:40:51.652 07:30:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:40:51.652 00:40:51.652 real 0m18.670s 00:40:51.652 user 0m59.840s 00:40:51.652 sys 0m5.057s 00:40:51.652 07:30:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:51.652 07:30:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:40:51.652 ************************************ 00:40:51.652 END TEST nvmf_target_disconnect_tc3 00:40:51.652 ************************************ 00:40:51.652 07:30:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:40:51.652 07:30:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:40:51.652 07:30:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:51.652 07:30:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:40:51.652 07:30:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:40:51.652 07:30:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:40:51.652 07:30:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:40:51.652 07:30:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:51.652 07:30:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:40:51.652 rmmod nvme_rdma 00:40:51.652 rmmod nvme_fabrics 00:40:51.652 07:30:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:51.652 07:30:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:40:51.652 07:30:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:40:51.652 07:30:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1923164 ']' 00:40:51.652 07:30:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1923164 00:40:51.652 07:30:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1923164 ']' 00:40:51.652 07:30:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1923164 00:40:51.652 07:30:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:40:51.652 07:30:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:51.652 07:30:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1923164 00:40:51.652 07:30:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:40:51.652 07:30:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:40:51.652 07:30:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1923164' 00:40:51.652 killing process with pid 1923164 00:40:51.652 07:30:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@967 -- # kill 1923164 00:40:51.652 07:30:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1923164 00:40:54.250 07:30:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:54.250 07:30:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:40:54.250 00:40:54.250 real 0m42.382s 00:40:54.250 user 2m31.756s 00:40:54.250 sys 0m14.821s 00:40:54.250 07:30:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:54.250 07:30:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:40:54.250 ************************************ 00:40:54.250 END TEST nvmf_target_disconnect 00:40:54.250 ************************************ 00:40:54.250 07:30:08 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:40:54.250 00:40:54.250 real 8m14.378s 00:40:54.250 user 22m49.999s 00:40:54.250 sys 2m3.394s 00:40:54.250 07:30:08 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:54.250 07:30:08 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:40:54.250 ************************************ 00:40:54.250 END TEST nvmf_host 00:40:54.250 ************************************ 00:40:54.250 00:40:54.250 real 31m41.666s 00:40:54.250 user 88m35.672s 00:40:54.250 sys 7m31.909s 00:40:54.250 07:30:08 nvmf_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:54.250 07:30:08 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:40:54.250 ************************************ 00:40:54.250 END TEST nvmf_rdma 00:40:54.250 ************************************ 00:40:54.250 07:30:08 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:40:54.250 07:30:08 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:40:54.250 07:30:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:54.250 07:30:08 -- common/autotest_common.sh@10 -- # set +x 00:40:54.250 ************************************ 00:40:54.250 START TEST spdkcli_nvmf_rdma 00:40:54.250 ************************************ 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:40:54.250 * Looking for test storage... 
00:40:54.250 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:40:54.250 07:30:08 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:40:54.251 07:30:08 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1926600 00:40:54.251 07:30:08 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 1926600 00:40:54.251 07:30:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@829 -- # '[' -z 1926600 ']' 00:40:54.251 07:30:08 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:40:54.251 07:30:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:54.251 07:30:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:54.251 07:30:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@836 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:54.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:54.251 07:30:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:54.251 07:30:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:40:54.251 [2024-07-24 07:30:08.661828] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:40:54.251 [2024-07-24 07:30:08.661925] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1926600 ] 00:40:54.251 EAL: No free 2048 kB hugepages reported on node 1 00:40:54.251 [2024-07-24 07:30:08.811213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:54.510 [2024-07-24 07:30:09.023452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:54.510 [2024-07-24 07:30:09.023462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:55.078 07:30:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:55.078 07:30:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@862 -- # return 0 00:40:55.078 07:30:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:40:55.078 07:30:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:55.078 07:30:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:40:55.078 07:30:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:40:55.078 07:30:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:40:55.078 07:30:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:40:55.078 07:30:09 spdkcli_nvmf_rdma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:40:55.078 07:30:09 spdkcli_nvmf_rdma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:55.078 07:30:09 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:55.078 07:30:09 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:55.078 07:30:09 spdkcli_nvmf_rdma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:55.078 07:30:09 spdkcli_nvmf_rdma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:55.078 07:30:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:55.078 07:30:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:55.078 07:30:09 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:55.078 07:30:09 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:55.078 07:30:09 spdkcli_nvmf_rdma -- nvmf/common.sh@285 -- # xtrace_disable 00:40:55.078 07:30:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # pci_devs=() 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # local -a pci_devs 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # pci_drivers=() 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # 
local -A pci_drivers 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # net_devs=() 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # local -ga net_devs 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # e810=() 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # local -ga e810 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # x722=() 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # local -ga x722 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # mlx=() 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # local -ga mlx 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:41:03.189 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:41:03.189 Found 0000:d9:00.1 
(0x15b3 - 0x1015) 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:41:03.189 Found net devices under 0000:d9:00.0: mlx_0_0 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:41:03.189 Found net devices under 0000:d9:00.1: mlx_0_1 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # is_hw=yes 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@420 -- # rdma_device_init 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # uname 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@63 -- # modprobe ib_core 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:41:03.189 07:30:17 
spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:41:03.189 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:41:03.190 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:41:03.190 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:41:03.190 altname enp217s0f0np0 00:41:03.190 altname ens818f0np0 00:41:03.190 inet 192.168.100.8/24 scope global mlx_0_0 00:41:03.190 valid_lft forever preferred_lft forever 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:41:03.190 07:30:17 
spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:41:03.190 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:41:03.190 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:41:03.190 altname enp217s0f1np1 00:41:03.190 altname ens818f1np1 00:41:03.190 inet 192.168.100.9/24 scope global mlx_0_1 00:41:03.190 valid_lft forever preferred_lft forever 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # return 0 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:41:03.190 07:30:17 spdkcli_nvmf_rdma 
-- nvmf/common.sh@112 -- # interface=mlx_0_1 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:41:03.190 192.168.100.9' 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:41:03.190 192.168.100.9' 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # head -n 1 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # head -n 1 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:41:03.190 192.168.100.9' 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # tail -n +2 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:41:03.190 07:30:17 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:41:03.190 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:41:03.190 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:41:03.190 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:41:03.190 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:41:03.190 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:41:03.190 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:41:03.190 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:03.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:41:03.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:41:03.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:41:03.190 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:03.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:41:03.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create 
rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:41:03.190 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:03.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:41:03.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:41:03.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:41:03.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:03.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:03.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:41:03.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:41:03.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:41:03.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:41:03.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:03.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:41:03.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:41:03.190 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:41:03.190 ' 00:41:06.477 [2024-07-24 07:30:20.391523] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x61200002b840/0x7f29de726940) succeed. 00:41:06.477 [2024-07-24 07:30:20.401549] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x61200002b9c0/0x7f29ddbbd940) succeed. 
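The interface/IP resolution traced above (nvmf/common.sh@112-113 and @456-458) amounts to a small pipeline over iproute2 output. A rough standalone sketch of that logic, assuming the mlx_0_0/mlx_0_1 netdev names and the 192.168.100.x addresses seen in this run:

    # Return the first IPv4 address configured on a netdev, with the /prefix stripped.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    # One address per RDMA-capable interface, split the same way common.sh does
    # with head/tail into the first and second target IPs.
    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)    # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2)  # 192.168.100.9 in this run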
00:41:07.414 [2024-07-24 07:30:21.727944] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:41:09.318 [2024-07-24 07:30:23.902846] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:41:11.217 [2024-07-24 07:30:25.777120] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:41:12.593 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:41:12.593 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:41:12.593 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:41:12.593 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:41:12.593 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:41:12.593 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:41:12.593 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:41:12.593 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:12.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:41:12.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:41:12.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:41:12.593 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:12.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:41:12.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:41:12.593 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:12.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:41:12.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:41:12.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:41:12.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:12.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:12.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:41:12.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:41:12.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:41:12.593 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:41:12.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:12.594 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:41:12.594 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:41:12.594 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:41:12.852 07:30:27 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:41:12.852 07:30:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:41:12.852 07:30:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:41:12.852 07:30:27 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:41:12.852 07:30:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:41:12.852 07:30:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:41:12.852 07:30:27 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:41:12.852 07:30:27 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:41:13.111 07:30:27 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:41:13.370 07:30:27 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:41:13.370 07:30:27 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:41:13.370 07:30:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:41:13.370 07:30:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:41:13.370 07:30:27 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:41:13.370 07:30:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:41:13.370 07:30:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:41:13.370 07:30:27 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:41:13.370 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:41:13.370 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:13.370 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:41:13.370 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:41:13.370 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:41:13.370 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:41:13.370 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:13.370 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:41:13.370 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:41:13.370 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:41:13.370 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:41:13.370 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:41:13.370 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:41:13.370 ' 00:41:20.004 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:41:20.004 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:41:20.004 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:20.004 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:41:20.004 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:41:20.004 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:41:20.004 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:41:20.004 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:20.004 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:41:20.004 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:41:20.004 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:41:20.004 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:41:20.004 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:41:20.004 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:41:20.004 07:30:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:41:20.004 07:30:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:41:20.004 07:30:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:41:20.004 07:30:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 1926600 00:41:20.004 07:30:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@948 -- # '[' -z 1926600 ']' 00:41:20.004 07:30:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@952 -- # kill -0 1926600 00:41:20.004 07:30:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@953 -- # uname 00:41:20.004 07:30:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:20.004 07:30:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1926600 00:41:20.004 07:30:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:41:20.004 07:30:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:41:20.004 07:30:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1926600' 00:41:20.004 killing process with pid 1926600 00:41:20.004 07:30:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@967 -- # kill 1926600 00:41:20.004 07:30:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@972 -- # wait 1926600 00:41:20.940 07:30:35 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:41:20.940 07:30:35 spdkcli_nvmf_rdma -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:20.940 07:30:35 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # sync 00:41:20.940 07:30:35 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 
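Stepping back to the check_match stage traced above (spdkcli/common.sh@44-46): it is a golden-file comparison in which the live /nvmf tree is dumped with spdkcli and checked against a stored .match file by SPDK's match tool. Roughly, and assuming the workspace paths used in this job:

    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    match_dir=$spdk/test/spdkcli/match_files

    # Dump the current configuration tree, compare it to the golden file, then
    # drop the temporary dump (the match tool pairs foo.test.match with foo.test).
    $spdk/scripts/spdkcli.py ll /nvmf > "$match_dir/spdkcli_nvmf.test"
    $spdk/test/app/match/match "$match_dir/spdkcli_nvmf.test.match"
    rm -f "$match_dir/spdkcli_nvmf.test"

A mismatch makes the match binary exit non-zero, which is what would fail the spdkcli_check_match timing block above.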
00:41:20.940 07:30:35 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:41:20.940 07:30:35 spdkcli_nvmf_rdma -- nvmf/common.sh@120 -- # set +e 00:41:20.940 07:30:35 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:20.940 07:30:35 spdkcli_nvmf_rdma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:41:20.940 rmmod nvme_rdma 00:41:20.940 rmmod nvme_fabrics 00:41:20.940 07:30:35 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:20.940 07:30:35 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set -e 00:41:20.940 07:30:35 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # return 0 00:41:20.940 07:30:35 spdkcli_nvmf_rdma -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:41:20.940 07:30:35 spdkcli_nvmf_rdma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:41:20.940 07:30:35 spdkcli_nvmf_rdma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:41:20.940 00:41:20.940 real 0m26.860s 00:41:20.940 user 0m56.183s 00:41:20.940 sys 0m7.531s 00:41:20.940 07:30:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:20.940 07:30:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:41:20.940 ************************************ 00:41:20.940 END TEST spdkcli_nvmf_rdma 00:41:20.940 ************************************ 00:41:20.940 07:30:35 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:41:20.940 07:30:35 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:41:20.940 07:30:35 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:41:20.940 07:30:35 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:41:20.940 07:30:35 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:41:20.940 07:30:35 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:41:20.940 07:30:35 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:41:20.940 07:30:35 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:41:20.940 07:30:35 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:41:20.940 07:30:35 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:41:20.940 07:30:35 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:41:20.940 07:30:35 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:41:20.940 07:30:35 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:41:20.940 07:30:35 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:41:20.940 07:30:35 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:41:20.940 07:30:35 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:41:20.940 07:30:35 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:41:20.940 07:30:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:41:20.940 07:30:35 -- common/autotest_common.sh@10 -- # set +x 00:41:20.940 07:30:35 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:41:20.940 07:30:35 -- common/autotest_common.sh@1390 -- # local autotest_es=0 00:41:20.940 07:30:35 -- common/autotest_common.sh@1391 -- # xtrace_disable 00:41:20.940 07:30:35 -- common/autotest_common.sh@10 -- # set +x 00:41:27.509 INFO: APP EXITING 00:41:27.509 INFO: killing all VMs 00:41:27.509 INFO: killing vhost app 00:41:27.509 WARN: no vhost pid file found 00:41:27.509 INFO: EXIT DONE 00:41:30.794 Waiting for block devices as requested 00:41:30.794 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:41:30.794 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:41:30.794 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:41:30.794 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:41:30.794 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:41:30.794 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:41:30.794 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:41:31.052 
0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:41:31.052 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:41:31.052 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:41:31.311 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:41:31.311 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:41:31.311 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:41:31.311 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:41:31.569 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:41:31.569 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:41:31.827 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:41:36.015 Cleaning 00:41:36.015 Removing: /var/run/dpdk/spdk0/config 00:41:36.015 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:41:36.015 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:41:36.015 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:41:36.015 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:41:36.015 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:41:36.015 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:41:36.015 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:41:36.015 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:41:36.015 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:41:36.015 Removing: /var/run/dpdk/spdk0/hugepage_info 00:41:36.015 Removing: /var/run/dpdk/spdk1/config 00:41:36.015 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:41:36.015 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:41:36.015 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:41:36.015 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:41:36.015 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:41:36.015 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:41:36.015 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:41:36.015 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:41:36.015 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:41:36.015 Removing: /var/run/dpdk/spdk1/hugepage_info 00:41:36.015 Removing: /var/run/dpdk/spdk1/mp_socket 00:41:36.015 Removing: /var/run/dpdk/spdk2/config 00:41:36.015 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:41:36.015 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:41:36.015 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:41:36.015 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:41:36.015 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:41:36.015 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:41:36.015 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:41:36.015 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:41:36.015 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:41:36.015 Removing: /var/run/dpdk/spdk2/hugepage_info 00:41:36.015 Removing: /var/run/dpdk/spdk3/config 00:41:36.015 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:41:36.015 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:41:36.015 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:41:36.015 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:41:36.015 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:41:36.015 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:41:36.015 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:41:36.015 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:41:36.015 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:41:36.016 Removing: /var/run/dpdk/spdk3/hugepage_info 00:41:36.016 Removing: /var/run/dpdk/spdk4/config 00:41:36.016 
Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:41:36.016 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:41:36.016 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:41:36.016 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:41:36.016 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:41:36.016 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:41:36.016 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:41:36.016 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:41:36.016 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:41:36.016 Removing: /var/run/dpdk/spdk4/hugepage_info 00:41:36.016 Removing: /dev/shm/bdevperf_trace.pid1510093 00:41:36.016 Removing: /dev/shm/bdev_svc_trace.1 00:41:36.016 Removing: /dev/shm/nvmf_trace.0 00:41:36.016 Removing: /dev/shm/spdk_tgt_trace.pid1436682 00:41:36.016 Removing: /var/run/dpdk/spdk0 00:41:36.016 Removing: /var/run/dpdk/spdk1 00:41:36.016 Removing: /var/run/dpdk/spdk2 00:41:36.016 Removing: /var/run/dpdk/spdk3 00:41:36.016 Removing: /var/run/dpdk/spdk4 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1432333 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1434127 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1436682 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1437830 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1439068 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1439820 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1441204 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1441476 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1442134 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1448230 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1449952 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1450807 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1451523 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1452282 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1453140 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1453437 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1453726 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1454220 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1455151 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1458569 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1459331 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1459974 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1460242 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1462139 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1462411 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1464304 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1464578 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1465146 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1465413 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1466088 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1466381 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1468238 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1468566 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1469113 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1469691 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1469988 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1470456 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1470864 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1471405 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1471924 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1472249 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1472799 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1473340 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1473656 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1474188 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1474736 00:41:36.016 Removing: 
/var/run/dpdk/spdk_pid1475117 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1475577 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1476129 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1476670 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1477028 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1477516 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1478063 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1478524 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1478918 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1479469 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1480017 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1480355 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1481221 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1486534 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1491782 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1503089 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1503931 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1510093 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1510509 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1516328 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1523230 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1526029 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1538719 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1568969 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1574246 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1677231 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1683399 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1690590 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1701491 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1750248 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1755274 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1802214 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1803996 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1806031 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1811554 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1820581 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1821759 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1822968 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1824059 00:41:36.016 Removing: /var/run/dpdk/spdk_pid1824558 00:41:36.274 Removing: /var/run/dpdk/spdk_pid1830336 00:41:36.274 Removing: /var/run/dpdk/spdk_pid1830339 00:41:36.274 Removing: /var/run/dpdk/spdk_pid1835890 00:41:36.274 Removing: /var/run/dpdk/spdk_pid1836449 00:41:36.274 Removing: /var/run/dpdk/spdk_pid1837115 00:41:36.274 Removing: /var/run/dpdk/spdk_pid1837886 00:41:36.274 Removing: /var/run/dpdk/spdk_pid1838014 00:41:36.274 Removing: /var/run/dpdk/spdk_pid1840425 00:41:36.274 Removing: /var/run/dpdk/spdk_pid1842577 00:41:36.274 Removing: /var/run/dpdk/spdk_pid1844673 00:41:36.274 Removing: /var/run/dpdk/spdk_pid1846532 00:41:36.274 Removing: /var/run/dpdk/spdk_pid1848383 00:41:36.274 Removing: /var/run/dpdk/spdk_pid1850238 00:41:36.274 Removing: /var/run/dpdk/spdk_pid1857215 00:41:36.274 Removing: /var/run/dpdk/spdk_pid1857762 00:41:36.274 Removing: /var/run/dpdk/spdk_pid1860138 00:41:36.274 Removing: /var/run/dpdk/spdk_pid1861484 00:41:36.274 Removing: /var/run/dpdk/spdk_pid1869783 00:41:36.274 Removing: /var/run/dpdk/spdk_pid1872705 00:41:36.274 Removing: /var/run/dpdk/spdk_pid1879455 00:41:36.274 Removing: /var/run/dpdk/spdk_pid1890892 00:41:36.274 Removing: /var/run/dpdk/spdk_pid1890907 00:41:36.274 Removing: /var/run/dpdk/spdk_pid1912744 00:41:36.274 Removing: /var/run/dpdk/spdk_pid1913281 00:41:36.274 Removing: /var/run/dpdk/spdk_pid1920358 00:41:36.274 Removing: /var/run/dpdk/spdk_pid1920723 00:41:36.274 Removing: /var/run/dpdk/spdk_pid1922604 00:41:36.274 Removing: 
/var/run/dpdk/spdk_pid1926600 00:41:36.274 Clean 00:41:36.274 07:30:50 -- common/autotest_common.sh@1449 -- # return 0 00:41:36.274 07:30:50 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:41:36.274 07:30:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:41:36.274 07:30:50 -- common/autotest_common.sh@10 -- # set +x 00:41:36.532 07:30:50 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:41:36.532 07:30:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:41:36.532 07:30:50 -- common/autotest_common.sh@10 -- # set +x 00:41:36.532 07:30:50 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:41:36.532 07:30:50 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:41:36.532 07:30:50 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:41:36.532 07:30:50 -- spdk/autotest.sh@391 -- # hash lcov 00:41:36.532 07:30:50 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:41:36.532 07:30:50 -- spdk/autotest.sh@393 -- # hostname 00:41:36.532 07:30:50 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-21 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:41:36.532 geninfo: WARNING: invalid characters removed from testname! 00:41:58.447 07:31:10 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:41:58.447 07:31:13 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:42:00.347 07:31:14 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:42:01.723 07:31:16 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:42:03.622 07:31:17 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:42:04.998 07:31:19 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:42:06.900 07:31:21 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:42:06.900 07:31:21 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:42:06.900 07:31:21 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:42:06.900 07:31:21 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:06.900 07:31:21 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:06.900 07:31:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.900 07:31:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.900 07:31:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.900 07:31:21 -- paths/export.sh@5 -- $ export PATH 00:42:06.900 07:31:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.900 07:31:21 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:42:06.900 07:31:21 -- common/autobuild_common.sh@447 -- $ date +%s 00:42:06.900 07:31:21 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721799081.XXXXXX 00:42:06.900 07:31:21 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721799081.Xe7nn1 00:42:06.900 07:31:21 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:42:06.900 07:31:21 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:42:06.900 07:31:21 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:42:06.900 07:31:21 -- 
common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:42:06.900 07:31:21 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:42:06.900 07:31:21 -- common/autobuild_common.sh@463 -- $ get_config_params 00:42:06.900 07:31:21 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:42:06.900 07:31:21 -- common/autotest_common.sh@10 -- $ set +x 00:42:06.900 07:31:21 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:42:06.900 07:31:21 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:42:06.900 07:31:21 -- pm/common@17 -- $ local monitor 00:42:06.900 07:31:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:06.900 07:31:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:06.900 07:31:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:06.900 07:31:21 -- pm/common@21 -- $ date +%s 00:42:06.900 07:31:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:06.900 07:31:21 -- pm/common@21 -- $ date +%s 00:42:06.900 07:31:21 -- pm/common@25 -- $ sleep 1 00:42:06.900 07:31:21 -- pm/common@21 -- $ date +%s 00:42:06.900 07:31:21 -- pm/common@21 -- $ date +%s 00:42:06.900 07:31:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721799081 00:42:06.900 07:31:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721799081 00:42:06.900 07:31:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721799081 00:42:06.900 07:31:21 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721799081 00:42:06.900 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721799081_collect-vmstat.pm.log 00:42:06.900 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721799081_collect-cpu-load.pm.log 00:42:06.900 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721799081_collect-cpu-temp.pm.log 00:42:06.900 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721799081_collect-bmc-pm.bmc.pm.log 00:42:07.835 07:31:22 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:42:07.835 07:31:22 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:42:07.836 07:31:22 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:42:07.836 07:31:22 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:42:07.836 07:31:22 -- spdk/autopackage.sh@18 -- $ [[ 
1 -eq 0 ]] 00:42:07.836 07:31:22 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:42:07.836 07:31:22 -- spdk/autopackage.sh@19 -- $ timing_finish 00:42:07.836 07:31:22 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:42:07.836 07:31:22 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:42:07.836 07:31:22 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:42:07.836 07:31:22 -- spdk/autopackage.sh@20 -- $ exit 0 00:42:07.836 07:31:22 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:42:07.836 07:31:22 -- pm/common@29 -- $ signal_monitor_resources TERM 00:42:07.836 07:31:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:42:07.836 07:31:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:07.836 07:31:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:42:07.836 07:31:22 -- pm/common@44 -- $ pid=1949158 00:42:07.836 07:31:22 -- pm/common@50 -- $ kill -TERM 1949158 00:42:07.836 07:31:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:07.836 07:31:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:42:07.836 07:31:22 -- pm/common@44 -- $ pid=1949160 00:42:07.836 07:31:22 -- pm/common@50 -- $ kill -TERM 1949160 00:42:07.836 07:31:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:07.836 07:31:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:42:07.836 07:31:22 -- pm/common@44 -- $ pid=1949162 00:42:07.836 07:31:22 -- pm/common@50 -- $ kill -TERM 1949162 00:42:07.836 07:31:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:07.836 07:31:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:42:07.836 07:31:22 -- pm/common@44 -- $ pid=1949186 00:42:07.836 07:31:22 -- pm/common@50 -- $ sudo -E kill -TERM 1949186 00:42:07.836 + [[ -n 1314294 ]] 00:42:07.836 + sudo kill 1314294 00:42:08.104 [Pipeline] } 00:42:08.124 [Pipeline] // stage 00:42:08.130 [Pipeline] } 00:42:08.149 [Pipeline] // timeout 00:42:08.153 [Pipeline] } 00:42:08.170 [Pipeline] // catchError 00:42:08.174 [Pipeline] } 00:42:08.190 [Pipeline] // wrap 00:42:08.195 [Pipeline] } 00:42:08.209 [Pipeline] // catchError 00:42:08.218 [Pipeline] stage 00:42:08.220 [Pipeline] { (Epilogue) 00:42:08.233 [Pipeline] catchError 00:42:08.235 [Pipeline] { 00:42:08.248 [Pipeline] echo 00:42:08.250 Cleanup processes 00:42:08.256 [Pipeline] sh 00:42:08.543 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:42:08.543 1949264 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache 00:42:08.543 1949605 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:42:08.558 [Pipeline] sh 00:42:08.843 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:42:08.843 ++ grep -v 'sudo pgrep' 00:42:08.843 ++ awk '{print $1}' 00:42:08.843 + sudo kill -9 1949264 00:42:08.895 [Pipeline] sh 00:42:09.186 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:42:09.186 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:42:14.462 xz: Reduced the number of threads from 
112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:42:18.665 [Pipeline] sh 00:42:18.949 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:42:18.949 Artifacts sizes are good 00:42:18.962 [Pipeline] archiveArtifacts 00:42:18.969 Archiving artifacts 00:42:19.132 [Pipeline] sh 00:42:19.415 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest 00:42:19.428 [Pipeline] cleanWs 00:42:19.437 [WS-CLEANUP] Deleting project workspace... 00:42:19.437 [WS-CLEANUP] Deferred wipeout is used... 00:42:19.443 [WS-CLEANUP] done 00:42:19.445 [Pipeline] } 00:42:19.464 [Pipeline] // catchError 00:42:19.475 [Pipeline] sh 00:42:19.757 + logger -p user.info -t JENKINS-CI 00:42:19.766 [Pipeline] } 00:42:19.781 [Pipeline] // stage 00:42:19.786 [Pipeline] } 00:42:19.802 [Pipeline] // node 00:42:19.807 [Pipeline] End of Pipeline 00:42:19.845 Finished: SUCCESS
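For reference, the coverage post-processing traced in the epilogue (autotest.sh@393-400) reduces to one capture, one merge with the pre-test baseline, and a series of removals of trees that are not SPDK's own code. A condensed sketch, assuming the workspace/output layout of this job and omitting the lcov_branch_coverage/genhtml rc switches passed in the real run:

    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    out=$spdk/../output

    # Capture counters from the build tree, tagged with the node name.
    lcov --no-external -q -c -d "$spdk" -t "$(hostname)" -o "$out/cov_test.info"
    # Merge the pre-test baseline with the post-test capture.
    lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    # Strip DPDK, system headers and helper apps from the combined report, in place.
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
    done
    rm -f "$out/cov_base.info" "$out/cov_test.info"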